Search Results

Search found 41497 results on 1660 pages for 'fault'.

  • How would I set up iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user with several aliases that all point to a single account (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between 6 and 10 thousand copies of messages I've already received, just to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely).

    I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (are they processed in order, or is it a first-match situation?), and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to screw up all the sender information once the mail reaches my GMail account and show it as coming from me instead of the original senders (can anyone confirm that this is accurate?).

    An easy answer would be to delete my user account entirely and replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I could set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me.

    Does anyone know of a more direct workaround for copying a user's incoming mail to an outside server and then deleting the local copy w/o removing the account entirely? Or is this just wishful thinking? Thanks in advance. Scott

  • How to remove an "extra" (unwanted) network from a windows 2008 failover cluster?

    - by Trondh
    Hi, we had a severe crash on one node of our 2-node Windows 2008 / Exchange 2007 CCR cluster the other day, and I tried to rebuild the node from scratch. I'm using this as a rough outline: http://edmckinzie.spaces.live.com/Blog/cns!687C72A5909E4230!508.entry?sa=641979772

    The problem: our cluster was originally set up with only one NIC per host, as this is supposedly supported in Win2008 (no dedicated heartbeat NIC). When I add my freshly installed node to the cluster, it shows up with two cluster networks, "Cluster Network 1" and "Cluster Network 2". The existing node's NIC has been placed in one cluster network and my freshly installed one has ended up in the other. I can't find anywhere in the GUI to choose which cluster network each physical NIC should be part of, but I KNOW I have done this before. Time is of the essence on this one, so I was hoping someone in here had the answer on the top of their head... Thanks for any pointers. Regards, Trond Hindenes
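
    For what it's worth, Windows failover clustering groups NICs into cluster networks strictly by IP subnet; there is no GUI control to assign a NIC to a network. So if the rebuilt node's NIC lands in its own "Cluster Network 2", its IP address or mask almost certainly differs from the surviving node's subnet. A hedged sketch of what I would check (interface name and addresses below are placeholders, not your real ones):

        rem On the rebuilt node: put the NIC back on the same subnet as the surviving node
        netsh interface ipv4 set address name="Local Area Connection" ^
            source=static address=192.168.1.12 mask=255.255.255.0 gateway=192.168.1.1

        rem Then verify both nodes now appear in the same cluster network
        cluster network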

  • Rack layout tools

    - by Luke
    I'm wondering if there are any tools (preferably offline) that would allow me to lay out all of the new equipment that will be going into several standard racks. Currently I'm using Excel to map out all of the slots/columns for the data, but I suspect there is some better method of doing this. Suggestions? Edit: Dell has an online tool, but it doesn't seem very good at actually saving the data that you're working on (and obviously it's geared towards Dell hardware).

  • RAID administration in Debian Lenny

    - by Siim K
    I've got an old box that I don't want to scrap yet because it's got a nice working 5-disk RAID assembly. I want to create 2 arrays: RAID 1 with 2 disks and RAID 5 with the other 3 disks. The RAID card is an Intel SRCU31L. I can create the RAID 1 volume in the console that you access with Ctrl+C at startup, but it only allows for the creation of one volume, so I can't do anything with the 3 remaining disks. I installed Debian Lenny on the RAID 1 volume and it worked out nicely.

    What utilities could I now use to create/manage the RAID volumes in Debian Linux? I installed the raidutils package but get an error when trying to fetch a list with raidutil -L controller or raidutil -L physical:

        # raidutil -L controller
        osdOpenEngine : 11/08/110-18:16:08
        Fatal error, no active controller device files found.
        Engine connect failed: Open

    What could I try to get this thing working? Can you suggest any other tools? The command lspci -vv gives me this about the controller:

        00:06.1 I2O: Intel Corporation Integrated RAID (rev 02) (prog-if 01)
                Subsystem: Intel Corporation Device 0001
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
                Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 64, Cache Line Size: 32 bytes
                Interrupt: pin A routed to IRQ 26
                Region 0: Memory at f9800000 (32-bit, prefetchable) [size=8M]
                [virtual] Expansion ROM at 30020000 [disabled] [size=64K]
                Capabilities: <access denied>
                Kernel driver in use: PCI_I2O
                Kernel modules: i2o_core
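
    The "no active controller device files found" message suggests raidutil cannot find a device node to talk through, rather than a dead card. Two hedged things to try; module and device-node names here are from memory, so treat this as a sketch and not a verified fix for the SRCU31L:

        # Option 1: lspci shows the generic I2O stack (PCI_I2O/i2o_core) bound to
        # the card; loading its userspace-config module should create /dev/i2o/ctl
        modprobe i2o_config
        ls -l /dev/i2o* 2>/dev/null
        raidutil -L controller

        # Option 2: raidutils descends from the old DPT tool set, which talks to
        # the dpt_i2o driver instead; the two drivers claim the same hardware
        modprobe -r i2o_config i2o_block i2o_core
        modprobe dpt_i2o
        ls -l /dev/dpti* 2>/dev/null
        raidutil -L controller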

  • racoon-tool doesn't generate full racoon.conf file in /var/lib/racoon/racoon.conf

    - by robthewolf
    I am using ipsec-tools/racoon to create my VPN. I am using racoon-tool to configure racoon.conf, but when I run racoon-tool reload it only generates the first section (Global items). When I run racoon-tool I get:

        # racoon-tool reload
        Loading SAD and SPD...
        SAD and SPD loaded.
        Configuring racoon...done.

    This is the entire file /var/lib/racoon/racoon.conf:

        #
        # Racoon configuration for Samuel
        # Generated on Wed Jan 5 21:31:49 2011 by racoon-tool
        #
        #
        # Global items
        #
        path pre_shared_key "/etc/racoon/psk.txt";
        path certificate "/etc/racoon/certs";

        log debug;

    I cannot find anywhere a solution as to why this is happening. Please help.
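
    One possible explanation: racoon-tool only writes per-peer and per-connection stanzas into the generated racoon.conf for tunnels actually defined in /etc/racoon/racoon-tool.conf; with only global settings defined, the output stops after the "Global items" block exactly as shown. The sketch below of the per-peer syntax is from memory, so treat the section names and attributes as assumptions and verify against racoon-tool.conf(5) (addresses are placeholders):

        # /etc/racoon/racoon-tool.conf -- sketch only
        peer(203.0.113.7):
                exchange_mode: main
                encryption_algorithm: aes

        connection(office):
                src_ip: 198.51.100.2
                dst_ip: 203.0.113.7
                src_range: 10.0.1.0/24
                dst_range: 10.0.2.0/24

    After adding a peer/connection pair, another racoon-tool reload should emit the remote{} and sainfo{} sections into /var/lib/racoon/racoon.conf.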

  • migrate SharePoint to SBS Server

    - by Eric Lorson
    We have a SharePoint 2003 server and we need to migrate that data to SharePoint Foundation 2010 on an SBS 2011 server. We cannot use the migration tool because one of the servers is SBS and the other is not. We exported the SharePoint data from the old system, but the import into the SBS SharePoint is failing with very little info on why. I think there is a schema conflict, but I am not that familiar with SBS and I am not finding the error in the Windows logs. Has anyone had to migrate data from a non-SBS system to an SBS system? Or can anyone help me figure out where to look for more info on what is going on?
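
    One thing worth checking before digging further: SharePoint 2003-era exports can't be imported directly into SharePoint Foundation 2010; the supported path goes through 2007 (or a database-attach upgrade) first, which would explain a schema-style failure. If the export file is already 2010-compatible, a hedged sketch of re-running the import and pulling the real error out of the ULS logs rather than the event log (the URL and path are placeholders):

        # Run from the SharePoint 2010 Management Shell on the SBS box
        Import-SPWeb http://sbsserver/sites/team -Path C:\exports\oldsite.cmp -IncludeUserSecurity

        # Import failures usually log detail to ULS, not the Windows event log
        Get-SPLogEvent -StartTime (Get-Date).AddMinutes(-10) |
            Where-Object { $_.Level -eq "High" }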

  • How do I change the owner of a folder on my server?

    - by Ashley Ward
    OK, I'll be more specific: I have uploaded a bunch of folders via FTP. These now all have the owner name of the account I used to log into FTP. How do I change the owner to be the server's user? And how do I find out what name the server is using? I'm pretty new to server permissions and the like, so please be gentle :) BTW, I'm using a Linux server.
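
    On most Linux boxes the web server runs as a dedicated user (commonly www-data on Debian/Ubuntu, apache on Red Hat flavors) rather than as "the server's name". A minimal sketch, assuming a Debian-style box and that the files live under /var/www/mysite (adjust both to your setup):

        # Find the user the web server actually runs as
        ps aux | egrep '(apache|httpd|nginx)' | grep -v grep

        # Change owner and group recursively (needs root)
        sudo chown -R www-data:www-data /var/www/mysite

        # Verify
        ls -l /var/www/mysite

    Note that on shared hosting without root you usually can't chown at all; in that case loosening permissions with chmod on the affected folders is the usual fallback.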

  • Commerce Server 2009 with SharePoint 2010 experiences

    - by rsteckly
    Hi, I'm trying to decide between using MojoPortal for my organization's CMS or Commerce Server 2009 with SharePoint 2010. We already have SharePoint 2010 for our intranet, so perhaps it would make sense to deploy the same technology? We do not have a lot of traffic but do need basic e-commerce functionality. I haven't really found a lot of documentation for Commerce Server 2009. It would have to share the same server with SharePoint 2010. I'm not worried about that because of the low traffic; I'm worried about how difficult it is to install. Is it a nightmare product to install or is it pretty straightforward? Is it unrealistic for it to share a server with SharePoint 2010, even with relatively low traffic? Any experiences with administering MojoPortal? Thanks!

  • How to access Windows Server 2008 R2 file shares from a different subnet

    - by Lloyd Cotten
    We have a couple of servers that used to be Windows Server 2003 and that we recently upgraded to Windows Server 2008 R2. A couple of details to set the situation up: we wiped the OS and re-installed. These servers are on one subnet (172.16.x.x) and we are trying to access some file shares on them from another subnet (10.34.x.x). The firewall is disabled on these servers. We're trying to access them with the UNC path \\172.16.x.x\sharename and with net use \\172.16.x.x. However, we're having problems doing this and getting "The network path was not found". Here are some of the things we've tried so far and the results:

    Tried accessing the share from other (non-2008) servers on the same subnet... Success!
    Ping servers from the different subnet... Success!
    Telnet connection into port 139 from the different subnet... Success!
    Took a scan through Local Security Policies to see if something obvious needed to be enabled / disabled / configured... Fail

    I'm not sure where to look next. I know that the router between the two subnets is locked down pretty good, but this did work for our 2003 servers. Has anything changed in the way of ports used for UNC / file share access in 2008? Maybe I'm missing some security policy setting? Hoping somebody can take pity on a poor programming guy that can't figure out something really simple. :-) Thanks!
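
    A likely culprit given the test results above: Server 2003 would happily serve NetBIOS session traffic on TCP 139, but Server 2008 R2 UNC access normally uses direct-hosted SMB on TCP 445, so a router that only passes 139 between the subnets would produce exactly this "network path was not found" symptom. A quick check from the 10.34.x.x side (portqry is a free Microsoft download; plain telnet works too):

        telnet 172.16.x.x 445
        portqry -n 172.16.x.x -e 445

        rem If 445 turns out to be filtered, net use should start working
        rem once the router permits it:
        net use \\172.16.x.x\sharename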

  • apache2 slow to respond (Debian)

    - by baloo
    I'm running an Apache 2.2.9 web server with mod_python and mpm_worker_module. The current config for the MPM is:

        ServerLimit         32
        StartServers        10
        MaxClients          800
        MinSpareThreads     25
        MaxSpareThreads     75
        ThreadsPerChild     25
        MaxRequestsPerChild 0

    The server has 1 GB of RAM and a 100 Mbit connection. Checking netstat -na | grep ESTABLISHED | wc -l gives me a number between 50 and 60, and the load is about 1.0. Every page load is also cached by memcached. I can't see why the server is so slow in responding to new connections, sometimes dropping them completely. I also tried disabling iptables to make sure it's not because of a full state table or something like that. The only thing in dmesg is a lot of spam about "TCP: Treason uncloaked!"
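
    The worker numbers themselves are self-consistent (ServerLimit 32 x ThreadsPerChild 25 = 800 = MaxClients), so the thread ceiling is unlikely to be the bottleneck at 50-60 established connections. Before tuning further I'd compare busy workers against those connections via mod_status; a sketch, assuming the stock Debian layout where status.conf already permits localhost:

        # Enable mod_status and check the scoreboard
        a2enmod status
        /etc/init.d/apache2 reload
        curl http://localhost/server-status?auto

    As for the dmesg spam: "TCP: Treason uncloaked!" is generally harmless log noise about remote peers shrinking their TCP window, and is rarely the cause of slow accepts by itself.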

  • Recommendations for an efficient offsite remote backup solution for VMs

    - by senorsmile
    I am looking for recommendations for backing up my current 6 VMs (soon to grow to up to 20). Currently I am running a two-node Proxmox cluster (a Debian base using KVM for virtualization, with a custom web front end to administer it). I have two nearly identical boxes with AMD Phenom II X4 CPUs and Asus motherboards. Each has 4 500 GB SATA2 HDDs: 1 for the OS and other data for the Proxmox install, and 3 using mdadm+drbd+lvm to share 1.5 TB of storage between the two machines. I mount LVM images in KVM for all of the virtual machines. I currently have the ability to do a live transfer from one machine to the other, typically within seconds (it takes about 2 minutes on the largest VM, running Win2008 with M$ SQL Server).

    I am using Proxmox's built-in vzdump utility to take snapshots of the VMs and store those on an external hard drive on the network. I then have the JungleDisk service (using Rackspace) to sync the vzdump folder for remote offsite backup. This is all fine and dandy, but it's not very scalable. For one, the backups themselves can take up to a few hours every night. With JungleDisk's block-level incremental transfers, the sync only transfers a small portion of the data offsite, but that still takes at least half an hour.

    The much better solution would of course be something that allows me to instantly take the difference between two points in time (say what was written from 6am to 7am), zip it, then send that difference file to the backup server, which would instantly transfer it to the remote storage on Rackspace. I have looked a little into ZFS and its ability to do send/receive. That, coupled with piping the data through bzip or something, would seem perfect. However, it seems that implementing a Nexenta server with ZFS would essentially require at least one or two more dedicated storage servers to serve iSCSI block volumes (via zvols???) to the Proxmox servers. I would prefer to keep the setup as minimal as possible (i.e. NOT having separate storage servers) if at all possible. I have also briefly read about Zumastor. It looks like it could also do what I want, but it appears to have halted development in 2008. So: ZFS, Zumastor or other?
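
    For reference, the ZFS piece of that idea is only a couple of commands; a minimal sketch of an incremental send between two snapshots, compressed in flight over ssh (pool/dataset names and the backup host are placeholders):

        # Take the next point-in-time snapshot
        zfs snapshot tank/vmstore@0700

        # Send only the delta since the previous snapshot, compressed on the wire
        zfs send -i tank/vmstore@0600 tank/vmstore@0700 \
            | bzip2 -c \
            | ssh backup.example.com "bzcat | zfs receive tank/vmstore-backup"

    This is exactly the "difference between two time points" workflow described above; the open question remains where the ZFS pool lives without adding dedicated storage servers.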

  • How to forward UDP and TCP traffic from one IP to another

    - by Rishabh Agnihotri
    I have a server with two LAN cards installed. I have a server in the U.S. and one in India, and I have created a GRE tunnel to route all traffic from the U.S. server to my Indian server. My traffic includes UDP, TCP, HTTP, etc. Now, on the Indian server with its two LAN cards, I have configured two IPs on the system for some of my needs: one is a /30 and another is a /24. I want the /30 IP to talk to my /24 IP. Let's say, for example, the IPs are 180.151.130.34 (/30) and 103.243.19.254 (/24).

    I want to forward all the TCP, UDP, HTTP, etc. traffic coming to 180.151.130.34 to 103.243.19.254. In other words, I want to make them talk to each other: if a TCP/UDP packet comes to 180.151.130.34, it should be forwarded to 103.243.19.254, and the reply from 103.243.19.254 should be sent back via 180.151.130.34. I am not able to configure this part. Can anyone tell me step by step how to do so? I forgot to specify: I am using Windows Server 2008. Any help would be greatly appreciated. Thanks in advance.
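
    On Windows Server 2008 the built-in mechanism for this is netsh's portproxy, with one big caveat: it forwards TCP only, so the UDP half needs RRAS/NAT or a third-party relay instead. A hedged sketch of the TCP part (the port numbers are placeholders; portproxy rules are per-port, not blanket):

        rem Forward TCP port 8080 arriving on 180.151.130.34 to 103.243.19.254
        netsh interface portproxy add v4tov4 listenaddress=180.151.130.34 listenport=8080 ^
            connectaddress=103.243.19.254 connectport=8080

        rem List the active proxy rules
        netsh interface portproxy show all

    Because portproxy terminates and re-originates the TCP connection, the return traffic flows back through 180.151.130.34 automatically.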

  • Any ideas for automatic initial/scripted configuration of a NetApp?

    - by maddenca
    The background: we want a solution that automates as much as possible of the building up of 'pods' that include server/network/storage and that will be built at remote sites. The ideal would be to create a single management server which is preconfigured with DHCP/TFTP/or whatever. This management server is racked with a Cisco UCS, FAS31x0, etc. at a build site and is then transported to the final customer site, where on power-up it almost configures itself, or at least bootstraps itself far enough that a remote skilled 'expert' can complete the setup of the pod. Ideas (they don't have to resemble 100% of the above) would be helpful.

  • In spite of correct DNS, Exchange sending to wrong destination server for single outbound domain

    - by beporter
    My company uses an SBS 2003 server and makes use of Exchange to host our own email. We also have a Linux server hosting domains for some of our clients. In order for us to send to those clients, we had internal DNS set up to shadow the client domains to provide "correct" MX records inside our network. For example, public DNS for a domain abc.com might point to 1.2.3.4, but internally we have MX records set up to route mail for abc.com to 172.16.0.4, which is the Linux email server. This setup was entirely functional; this is just back story.

    We've recently moved one of our client domains from our internal Linux server to an external email provider. When we did that, we naturally deleted our internal shadow DNS records so our Exchange server would fetch correct (public) DNS records and route mail out to the new external host. This has NOT had any effect on Exchange, though. Even after rebooting the Exchange server and completely flushing the DNS cache (nslookups on the Exchange machine itself correctly resolve to the new external address), Exchange still attempts to deliver messages for the domain to our internal server! Exchange correctly routes to all other internal and external domains when sending email.

    Somehow Exchange is trying to deliver to a machine that by all accounts it has no business trying to use, for just this one domain. Is there a DNS cache that Exchange uses internally? Is there a way to flush that internal cache? What else could I be missing?
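
    Two leads worth chasing, offered as suggestions rather than a confirmed diagnosis. Exchange 2003's SMTP engine runs on the IIS SMTP service, which keeps its own DNS cache that survives ipconfig /flushdns; restarting that service is the usual way to drop it:

        rem On the SBS/Exchange box
        ipconfig /flushdns
        net stop SMTPSVC
        net start SMTPSVC

    If it still routes internally after that, check Exchange System Manager for an SMTP connector whose address space includes that one domain and pins it to the old internal host; a connector route beats DNS every time.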

  • Problem exporting a partition on a USB stick as an NFS volume

    - by Bond
    I have a USB disk which has 2 partitions. I exported one of them (over NFS) and now I am trying to mount it on a client machine. Each time it fails with an error:

        mount -t nfs 192.168.1.19:/media/vol2 /mnt/nfs/
        mount.nfs: access denied by server while mounting 192.168.1.19:/media/vol2

    Here is the /etc/exports file entry; showmount -e on the NFS server gives:

        Export list for bond:
        /media/vol2 */24

    On the client machine the nfs-client package is installed. What more do I need to check? Is it logged somewhere?
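
    That export list looks like the clue: */24 is not a valid client specification in /etc/exports ("*" is a standalone hostname wildcard and can't take a prefix length), so no host matches the export, which yields exactly this access-denied error. A sketch of a corrected entry, assuming the clients sit in 192.168.1.0/24:

        # /etc/exports
        /media/vol2 192.168.1.0/24(rw,sync,no_subtree_check)

        # Re-export, verify, then retry the mount from the client
        exportfs -ra
        exportfs -v

    Denied mounts are typically logged on the server via syslog (e.g. /var/log/syslog or /var/log/messages, from mountd).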

  • Can I move a one-drive RAID 0 array on a PERC 6 to another server?

    - by zippy
    We have a Dell PowerEdge 2970 with a PERC 6/i RAID controller. We have a one-drive RAID 0 array (we wanted to add the drive as a JBOD, but the PERC forces you to create an array to access the disk). Can we take the one-drive RAID 0 and move it to a new server (one that doesn't have a PERC)? Since there's only one drive in the "array" there's no striping going on... the only issue would seem to be if the PERC has some metadata on the drive that would prevent Windows from reading it. Does anyone have any experience with this scenario?

  • How (in)secure are cell phones in reality?

    - by Aron Rotteveel
    I was recently re-reading an old Wired article about the Kaminsky DNS vulnerability and the story behind it. In this article there was a quote that came across as a little exaggerated to me:

        "The first thing I want to say to you," Vixie told Kaminsky, trying to contain the flood
        of feeling, "is never, ever repeat what you just told me over a cell phone." Vixie knew
        how easy it was to eavesdrop on a cell signal, and he had heard enough to know that he
        was facing a problem of global significance. If the information were intercepted by the
        wrong people, the wired world could be held ransom. Hackers could wreak havoc. Billions
        of dollars were at stake, and Vixie wasn't going to take any risks.

    When reading this I could not help but feel like it was a bit blown up and theatrical. Now, I know absolutely nothing about cell phones and the security problems involved, but to my understanding, cell phone security has improved quite a bit over the past few years. So my question is: how insecure are cell phones in reality? Are there any good articles that dig a bit deeper into this matter?

  • ARP retries on various *nix-based systems

    - by salparadise
    Does anyone know what determines the number of ARP attempts a router will make? I see different behaviors on two devices: if I traceroute to a non-existent host on a subnet that belongs to an interface on the router, a Linux box will try to ARP 3 times and then return a host-unreachable ICMP message, while JunOS will continuously try to ARP and not return anything. Is there a sysctl value that determines this, or anything at all?
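
    On Linux, at least, those three attempts come from the per-neighbour sysctls: mcast_solicit (default 3) bounds the broadcast probes sent for an unresolved entry, ucast_solicit bounds unicast re-confirmation probes, and retrans_time_ms sets the gap between tries. A quick look:

        # Current values (per-interface variants live under net.ipv4.neigh.<ifname>.)
        sysctl net.ipv4.neigh.default.mcast_solicit
        sysctl net.ipv4.neigh.default.ucast_solicit
        sysctl net.ipv4.neigh.default.retrans_time_ms

        # Example: probe five times instead of three
        sysctl -w net.ipv4.neigh.default.mcast_solicit=5

    JunOS exposes no equivalent sysctl; its ARP retry behavior is platform-specific configuration.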

  • RAID 5 GPT Partitioning

    - by user39325
    I have a Dell PowerEdge R710 server with five 1 TB disks, all of them in RAID 5. I was trying to install CentOS, but it says "Your boot partition is on a disk using GPT Partition...". I read somewhere that CentOS can't install on a 2 TB+ disk, so I made some partitions smaller, but it's not working. Any ideas? P.S. I am going to install Proxmox on this machine, and Proxmox likewise doesn't accept 2 TB disks...
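
    The usual way around this on a PERC-equipped box is to split the RAID 5 into two virtual disks in the controller BIOS: a small (<2 TB) virtual disk that holds /boot and the OS on a plain MBR label, and a second, larger virtual disk for data. Shrinking partitions on a single >2 TB GPT disk doesn't help if the bootloader is GRUB legacy, which (unpatched) can't boot from GPT. If the installer in question ships GRUB 2, the alternative is a BIOS boot partition; a sketch with parted, assuming /dev/sda is the array:

        # Label the disk GPT and reserve a tiny BIOS boot partition for GRUB 2
        parted /dev/sda mklabel gpt
        parted /dev/sda mkpart biosboot 1MiB 3MiB
        parted /dev/sda set 1 bios_grub on

    The two-virtual-disk route is the safer bet here since both CentOS-of-that-era and Proxmox installers handle a sub-2 TB MBR boot disk without fuss.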

  • Enable Remote Desktop File Transfer

    - by Eric J.
    Most of the servers I RDP into support cut-and-paste file transfer (from my Win7 64 machine). One does not, and I can't figure out what configuration step is missing. I followed the steps outlined in http://support.microsoft.com/kb/313292 but do not see the local file system on the remote server (or vice versa, for that matter), and I cannot cut and paste files from the local to the remote system. If I try to cut and paste from the remote system to the local one, I get the error:

        Cannot copy FILENAME: Windows cannot find '%1!ls!'. Check the spelling and try again,
        or try searching for the item by clicking the Start button and then clicking Search.

    The remote server is Windows Server 2003 Standard Edition.
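
    Clipboard file copy needs both clipboard and drive redirection allowed on both ends. Beyond the server-side settings in KB313292, the client side can be forced in the saved .rdp file; a sketch of the relevant lines (taken from memory of a Win7 mstsc profile, so verify against a freshly saved one):

        redirectclipboard:i:1
        drivestoredirect:s:*

    Also worth checking on the stubborn server: that rdpclip.exe is actually running inside the remote session (killing and restarting it is a common fix for one-off clipboard breakage), and that the Terminal Services configuration's client settings don't disable clipboard or drive mapping for that connection.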

  • SQL Server 2008 Replication Promotion

    - by Stefan Mai
    I have a 4-node cluster, 1 subscriber and 3 publishers, all running SQL Server 2008 R2 Enterprise. The intention is that if the subscriber goes down, we can use one of the publishers to quickly build up its replacement. Our testing reveals a problem, though: the subscriber databases all have Not For Replication set to Yes on the identity columns so that they can maintain the identity values set in the subscriber. This causes a problem when they become publishers, because now we don't have identity insert functionality: we get a primary key error. Any way to "promote" a subscriber to publisher?
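
    If the blocker really is the NOT FOR REPLICATION flag on the identity columns, SQL Server 2008 can flip it in place rather than rebuilding tables. The system procedure below is real but thinly documented, so treat the call pattern as an assumption and test on a copy first; you'd also want to reseed with DBCC CHECKIDENT afterwards:

        -- Sketch: turn NOT FOR REPLICATION off for one (hypothetical) table
        DECLARE @objid int = OBJECT_ID(N'dbo.Orders');
        EXEC sys.sp_identitycolumnforreplication @objid, 0;  -- 0 = NFR off

        -- Bring the identity seed up past the highest replicated value
        DBCC CHECKIDENT (N'dbo.Orders', RESEED);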

  • Rewriting Apache URLs to use only paths and set response headers

    - by jabley
    I have Apache httpd in front of an application running in Tomcat. The application exposes URLs of the form:

        /path/to/images?id={an-image-id}

    The entities returned by such URLs are images (even though URIs are opaque, I find human-friendly ones are easier to work with!). The application does not set caching directives on the image response, so I've added that via Apache:

        # LocationMatch to set caching directives on image responses
        <LocationMatch "^/path/to/images$">
            # Can't have Set-Cookie on response, otherwise the downstream caching proxy
            # won't cache!
            Header unset Set-Cookie
            # Mark the response as cacheable.
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    Note that I can't use ExpiresByType since not all images served by the app have versioned URIs. I know that the ones served by the /path/to/images resource handler are versioned URIs though, which don't perform any sort of content negotiation, and thus are ripe for Far Future Expires management. This is working well for us.

    Now a requirement has come up to put something else in front of the app (in this case, Amazon CloudFront) to further distribute and cache some of the content. Amazon CloudFront will not pass query string parameters through to my origin server. I thought I would be able to work around this by changing my Apache config appropriately:

        # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

        # LocationMatch to set caching directives on image responses
        <LocationMatch "^/path/to/images$">
            Header unset Set-Cookie
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    This works fine in terms of serving the content, but there are no longer caching directives with the response. I've tried playing around with [PT] and [P] for the RewriteRule, and adding a new LocationMatch directive:

        # Rewrite to map new Amazon CloudFront friendly URIs to the application resources
        # /new/path/to/images/12345 -> /path/to/images?id=12345
        RewriteRule ^/new/path/to/images/([0-9]+) /path/to/images?id=$1 [PT]

        <LocationMatch "^/path/to/images$">
            Header unset Set-Cookie
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

        <LocationMatch "^/new/path/to/images/">
            Header unset Set-Cookie
            Header append Cache-Control "max-age=8640000"
        </LocationMatch>

    Unfortunately, I'm still unable to get the Cache-Control header added to the response with the new URL format. Please point out what I'm missing to get /new/path/to/images/12345 returning a 200 response with a Cache-Control: max-age=8640000 header. Pointers as to how to debug Apache like this would be appreciated as well!
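
    Two hedged suggestions. First, after a [PT] rewrite the per-request Location walk runs against the rewritten URI, so the <LocationMatch "^/new/path/to/images/"> block likely never fires for these requests; the "^/path/to/images$" one is the block that should apply, and curl -v against both URL forms will show which one actually does. Second, mod_headers keeps two response-header tables, and responses generated by a handler or proxied backend often need the "always" table (supported in sufficiently recent 2.2 releases); a sketch:

        <LocationMatch "^/path/to/images$">
            Header unset Set-Cookie
            # "always" targets the secondary table that handler/proxy-generated
            # responses use; "set" avoids merging with a pre-existing header
            Header always set Cache-Control "max-age=8640000"
        </LocationMatch>

    For debugging, raising the rewrite log level (RewriteLog/RewriteLogLevel in 2.2) shows exactly which URI each request ends up with after the rule runs.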

  • Determine logged on user on Windows computer from Linux

    - by Justin
    How can I determine who is logged on to a remote Windows XP computer from Linux? I do not have administrator access on the domain or on the remote computer. I can do it from a separate Windows computer using PsLoggedOn from PsTools:

        PsLoggedOn -L \\computer

    I've tried using nmblookup -A remotecomputer, but I only see entries for the computer and the domain, not a <03> entry for the user. I've also tried running PsLoggedOn under Wine; I get an error:

        Connecting to Registry of \\computer.company.com...
        fixme:reg:RegConnectRegistryW Connect to L"computer.company.com" is not supported.

    I started looking into winexe, but it looks like I would need administrative rights on the remote computer to get it working.
