Search Results

Search found 9091 results on 364 pages for 'thinking'.

Page 261/364 | < Previous Page | 257 258 259 260 261 262 263 264 265 266 267 268  | Next Page >

  • How would I set up iMail to forward a user's mail to another service w/o leaving a copy locally?

    - by Scott Mayfield
    I have an iMail 2006 server installation in which I have a particular user that has several aliases that all point to a single user (me, for the record). I've been copying all of my mail to GMail and reading it there, but it annoys me that I have to go back weekly, log into my mail account on iMail, and delete between 6 and 10 thousand copies of messages I've already received, in order to keep my mailbox from filling up (yes, I have it set with no quota, but I consider it bad form to just let the box grow indefinitely). I've got the copying set up via an inbound user rule, but I'm wondering how to accomplish a "copy and delete" rule. The manual isn't clear on what happens with multiple matching rules (will they be processed in order, or is it a first-match situation?) and there isn't a means to combine multiple actions into a single rule. If I use the "forward" action, I THINK it's going to screw up all the sender information once the mail reaches my GMail account and show it as coming from me instead of the original senders (can anyone confirm that this is accurate?). An easy answer would be to delete my user account entirely and replace it with an alias that maps to my GMail account, but then I would lose my ability to log into the system for admin duties. So that leads me to creating a second, lesser-known account for admin use, but since it's a real account, sooner or later I'm going to get mail sent to it and I'll be back to the same situation of having a user account that doesn't get emptied periodically. I imagine I can set the quota to 0 MB to cause all incoming mail to my admin account to bounce, or set up an inbound rule to bounce everything, but this is starting to sound kludgy to me. Does anyone know of a more direct workaround for copying a user's incoming mail to an outside server and then deleting the local copy w/o removing their account entirely? Or is this just wishful thinking?

    Read the article

  • Hardware needed to route between two networks over wireless

    - by AptDweller
    I recently rented an apartment about 100 yards from my brother's house. I have line of sight to his house and can pick up his home AP signal with one of my two laptops if I go out on my balcony (facing his house) or put the laptop by the window. The other laptop will sometimes see the SSID broadcast, but fails to connect, drops, etc. We would like to set up a persistent wireless connection between our homes. We would prefer that each network stay logically segmented as an independent network, but he will share his internet connection. I've got a bunch of TV shows saved to a NAS by my TiVo that I'd like to make available to him across the wireless link. My brother strongly prefers not to mess with his WAP at all: his network is running fine and he's afraid of breaking it. I guess you could say he is "technologically declined". If we can get a reliable 11Mbps connection we will be satisfied. What hardware do I need to make this work? I was thinking a router with two wireless interfaces (external antennas), a wired interface, and a directional antenna mounted on my balcony facing his house. Can anyone recommend hardware to make this happen? Cheaper is better; I'll only be living in the area a year or two. I do have an old satellite TV antenna if that can be used to direct the signal.

    Read the article

  • Windows Media Center showing Jerky Video on PC

    - by Kris Erickson
    I had to repave my Windows 7 x64 box last week due to a hard drive crash, and for a while everything was running perfectly, but now all videos in Windows Media Center are jerky (the sound is fine, they just seem to skip a ton of frames all the time). This is on the local machine, but the same thing happens when I try to stream to my Xbox. The videos all show fine in VLC and Windows Media Player. I guess I must have installed something recently (in the process of getting all the apps I usually have running on my PC) that caused this, but for the life of me I can't figure it out. I have updated to the latest video driver (and then rolled back to the standard Windows 7 driver), and I have rolled back all the other drivers that I have installed (I believe). I have uninstalled all the codec packs (I also run TVersity, so I had the TVersity codec pack installed), and I uninstalled TVersity. Nothing seems to help. I have uninstalled Windows Media Center and reinstalled it from Programs and Features. I have basically run out of things to try to fix this, and am almost thinking about reinstalling Windows again. Any suggestions?

    Read the article

  • Virus cleanup + dying drive = XP Automatic Updates crashing in esent.dll

    - by quack quixote
    Background: I'm doing system recovery on an old WinXP SP1 system brought to me on suspicion of virus infection. After taking preliminary backups, I used MalwareBytes to detect and clean the infection. I might've even gotten it all. In the process, I've discovered (a) the system drive is showing signs of impending failure, and (b) the owner has been using the system's old crusty IE-6 instead of the up-to-date Firefox I've provided for him. So naturally, thinking I had a relatively stable system, I tried to hit the Windows Update site to install IE-8, in case further training doesn't stick. The update site told me it needed to update the installer, and I started that process. Soon after, wuauclt.exe started crashing, reporting addresses in module esent.dll. There's a Microsoft KB (910437) on a problem with that DLL, so I downloaded the hotfix and installed it. The crashing did not stop. I attempted to install SP3 from the offline installer, but that didn't fix the issue either. The system is reporting a few hard drive / IDE controller errors, but they don't correlate to the crashes, so they aren't the direct cause. I've also attempted to roll back to the time between the infection removal and the first crashes, but that doesn't help. Question: The hotfix I tried to install dealt with a problem in the transaction logs of the Extensible Storage Engine (ESE) database. I suspect this issue is similar, but that the database itself (whatever the ESE database is) is corrupted. Is there a way to clean or clear this database so that system operation returns to normal? Can someone enlighten me as to what the ESE database actually is, and where it resides? Can I just locate some files and delete them to bring this under control?

    Read the article

  • How do I set up routing for 2 companies with different Internet connections on the same LAN?

    - by Clint Miller
    Here's the setup: 2 companies (A & B) share office space and a LAN. A 2nd ISP is brought in, and company A wants its own Internet connection (ISP A) while company B wants its own Internet connection (ISP B). VLANs are deployed internally to separate the two companies' networks (company A: VLAN 1, company B: VLAN 2, shared VoIP: VLAN 3). With separate VLANs it's simple enough to use separate DHCP servers (or separate scopes on the same server) to assign each company's default gateway for its Internet connection. Static routes can be created on each gateway to point traffic destined for the other company's VLAN or the voice VLAN so that all nodes are reachable as expected. However, I think this is a form of asymmetrical routing, right? (The path from node A1 to node B1 is not the same as the path back from node B1 to node A1.) Can I set up policy-based routing to correct this? In that case, can I assign the same default gateway to every device on all VLANs and create a routing policy on an L3 switch that looks at the source address and forwards traffic to the appropriate next hop? In that case, I want the routing logic to go like this: if the destination address is known, forward the traffic (traffic destined for a different VLAN); if the destination address is unknown, forward the traffic to ISP A's gateway if the source address is on VLAN A, or forward the traffic to ISP B's gateway if the source address is on VLAN B. Am I thinking about this problem in the correct way? Is there another way to solve this problem that I am overlooking?
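
    For illustration, the source-based half of that logic can be sketched with Linux iproute2 as a stand-in for whatever policy-routing syntax a given L3 switch uses; all subnets, gateways, and interface names below are made up, and the sketch assumes the main routing table holds only the connected/VLAN routes (no default route).

        # Illustration only: the same source-based logic in Linux iproute2 terms,
        # not the actual CLI of any particular L3 switch.

        # one table per ISP, each holding only a default route
        ip route add default via 198.51.100.1 dev vlan1 table 101   # ISP A
        ip route add default via 203.0.113.1  dev vlan2 table 102   # ISP B

        # known destinations (other VLANs) resolve from the main table first;
        # only traffic with no specific route falls through to the ISP tables
        ip rule add from 10.1.0.0/24 lookup main pref 100
        ip rule add from 10.2.0.0/24 lookup main pref 100
        ip rule add from 10.1.0.0/24 lookup 101  pref 200   # company A -> ISP A
        ip rule add from 10.2.0.0/24 lookup 102  pref 200   # company B -> ISP B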

    Read the article

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option. However, for multiple users, this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk. The encryption keys would probably be stored in root's keychain. So here's my list of concerns: If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad. If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable. If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose. If root has a keychain (does it?), then how would it be decrypted? Would it remain locked until the password was entered (like the user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access? Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
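
    A minimal sketch of the unlock step, assuming the passphrase lives in a root-only file (mode 600) rather than in any keychain; this sidesteps the keychain questions but shares the system keychain's weakness against physical access. The UUID and paths are placeholders, and the script would be triggered at boot by a RunAtLoad launchd job in /Library/LaunchDaemons.

        #!/bin/sh
        # /usr/local/sbin/unlock-external.sh -- run at boot by a root LaunchDaemon.
        # UUID and paths are placeholders; the key file must be root-owned, mode 600.
        UUID="11111111-2222-3333-4444-555555555555"
        KEYFILE="/var/root/.external-volume-key"

        # -stdinpassphrase keeps the passphrase out of the process list;
        # older diskutil builds may only offer -passphrase (an assumption to verify)
        diskutil coreStorage unlockVolume "$UUID" -stdinpassphrase < "$KEYFILE"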

    Read the article

  • No headphone or speakers plugged in - Windows 7 issue

    - by Amit Ranjan
    I am facing a weird issue between my sound card driver and Windows 7 (any edition). I have a Sony VAIO notebook (VPCEB24EN). Two days ago, on startup, my Wi-Fi, charging, and speakers were disabled. Then I restarted the PC and everything worked fine. Later on I restarted my machine again, and found that my speakers were not working. I thought a restart would fix it, and restarted many more times, but it didn't work. I searched Google and found that it might be due to: 1. the hardware not being switched on in the BIOS; 2. missing hardware; 3. 64-bit drivers being overridden with 32-bit drivers. To get it working, I restored my laptop from scratch, but while restoring the PC, the Realtek HD driver installation gave me an error 505. I then formatted the drive and installed Win7 Ultimate 32-bit (the PC shipped with Win7 64-bit Home Basic). I got lots of yellow exclamation marks in Device Manager, and thought that installing all the drivers would resolve my issue. But even after installing all drivers on a fresh installation, I was still in the same position: a red cross on the speaker icon, "No Speakers or Headphone plugged in". Please note: my laptop is a VAIO E Series, VPCEB24EN. Audio: Intel® High Definition Audio compatible, but accepts Realtek Audio. While using BIOS Agent, I found an Intel Chipset 5 Series Audio Adapter and an ATI RV370 Audio adapter on my board. Installed is Win7 32-bit Ultimate; factory default was Win7 64-bit Home Basic. Memory: RAM 3GB / 320GB HDD. Display: ATI Mobility Radeon™ HD 5145 Graphics.

    Read the article

  • When did my LaTeX files become TeX files!?

    - by andrz_001
    After transferring all my files onto a new machine, all files that were once LaTeX files (having the TeXnicCenter "T" icon) are now TeX files (having the TeXworks icon -- I don't remember installing that one!). But... the files are still associated with TeXnicCenter! In other words, the files open with TeXnicCenter, yet the "type of file" is "TeX document". I'm using the same MiKTeX distribution and TeXnicCenter (both for Windows XP) as on the old machine. Again, I don't know how or where I got this TeXworks on the new machine -- I assume it came with Windows. A question: Could these TeX associations cause me any new surprises, anything unexpected? Like certain packages not working, etc.? Because I can't have that!! Should I not fix it if it ain't broken!? For instance, one particular file yesterday would not produce any PDF output after compiling... after trying many things (other files compiled as they should have), I got to thinking to try it without setspace for double spacing!! And it worked. I have no idea why. Anyway.... Real question: How do I revert to LaTeX file associations? Rhetorical question: Would this question have better luck on CTAN or Stack Overflow? I guess I'll find out! Thank you wholeheartedly!

    Read the article

  • Client certificate based encryption

    - by Timo Willemsen
    I have a question about the security of a file on a webserver. I have a file on my webserver which is used by my web application. It's a Bitcoin wallet; essentially it's a file with a private key in it, used to decrypt messages. My web application uses the file because it's used to receive transactions made through the Bitcoin network. I was looking into ways to secure it. Obviously if someone has root access to the server, he can do the same as my application. However, I need to find a way to encrypt it. I was thinking of something like this, but I have no clue if this is actually going to work: The client logs in with some sort of client certificate. The web application creates a wallet file. The web application encrypts the file with the client certificate. If the application wants to access the file, it has to use the client certificate. So basically, if someone gets root access to the site, they cannot access the wallet. Is this possible, and does anyone know about an implementation of this? Are there any problems with this? And how safe would this be?
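
    One way to express "encrypt to the client's certificate" with stock OpenSSL is S/MIME envelope encryption; a sketch with placeholder filenames follows. Note it does not remove the underlying problem that the application must hold the decrypted wallet, or the client's private key, while it processes transactions.

        # Encrypt the wallet so that only the holder of the client certificate's
        # private key can read it (filenames are placeholders)
        openssl smime -encrypt -aes256 -binary -in wallet.dat \
            -outform PEM -out wallet.dat.enc client_cert.pem

        # Decrypt later with the matching private key, e.g. while the client is present
        openssl smime -decrypt -inform PEM -in wallet.dat.enc \
            -inkey client_key.pem -out wallet.dat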

    Read the article

  • Cheap desktop computer in 19" rack-mountable form-factor?

    - by Alex Basson
    I'm a high school teacher at a small private school. As of this year, we have SMARTBoards in every classroom (though I've had one in the class I share for two years now). The classrooms themselves don't have computers in them, so we teachers bring our laptops to class and connect them to the boards. This has several disadvantages: It takes a few minutes while we wait for the board to boot up and then orient the board to our individual laptop -- we have to do this every time because different teachers have different laptops requiring different orientations. This isn't ideal because when you only have 43 minutes per class period, waiting five minutes just to get started is a real waste. Carrying your laptop to class doesn't sound so bad until you consider that we're also carrying textbooks and piles of student papers, and we're carrying it all through crowded high school hallways. More than one laptop has fallen THUNK to the floor, with dire consequences. We feel we could eliminate the need to use our laptops with the SMARTBoards if we had a dedicated computer in each classroom hooked up to the board at all times. Each board setup is connected to a podium with a standard 19" rack in it, currently housing a power supply and DVD player. There are plenty of rack spaces available. So I'm thinking: maybe we could get some inexpensive computers in a 19" rack-mountable form factor, install them in the podiums, and connect them to the boards on a permanent basis. Any suggestions?

    Read the article

  • Can you transfer a Windows 7 XP-mode VHD to a new PC without the need for reactivation?

    - by Henk
    Hi. At the moment I am running Vista 32-bit and use VPC 2007 a lot. I have several W2000 VMs set up with complex software configurations. For me, one of the advantages of working with VPCs is that a VM can easily be copied to another PC (like a notebook). Also, when I move to a new PC, I can immediately work with my existing VMs without the need to reinstall all applications. I am thinking about buying a new PC with Windows 7 64-bit Pro, and using the XP-mode VM that comes with Windows 7. The advantages would be support for more memory (so being able to run more VMs simultaneously) and having a licensed XP VM. I would like to know if you can transfer a VHD that is based on the XP-mode VHD to another machine without activation issues. The other machine will, of course, also be running W7. I would guess that this would not be a problem. However, I did read about a guy who had to reinstall his W7 because of a hard disk failure, and could not access his XP-mode VHD anymore because it demanded re-activation. Thanks. Henk.

    Read the article

  • Using a Level 2 switch as a core switch

    - by imtech
    I have a small user base of about 20 people on at a time, spiking up to about 80 people during peak times. Most people (80+%) are connected over our Aruba managed wireless system. We have a Windows domain. We have three 24-port switches all connecting back to a central 48-port switch, which the additional access ports, firewall, servers, and wireless controller also connect back to. It's a flat network with dumb switches. I'm in the process of upgrading our infrastructure. Cisco pricing for switches is pretty high for us, so I've been looking at HP ProCurves, which seem to be within our budget range. I want to eventually make use of 802.1X, SNMP, QoS for possible VoIP upgrades, VLANs to separate a guest VLAN from authenticated users, and other more advanced features. PoE would be nice, but that's probably too expensive for us. I was thinking of having our core switch be a ProCurve 2610 and the rest of our switches that connect to it be ProCurve 2510s. A true, full-blown Layer 3 switch is way out of our price range, but a 2610 seems to be good enough for us. The 2610 does static routing, which ought to be good enough for us, but I'm in unfamiliar territory so I'm looking for any gotchas. Also, should all the switches be 2610s or just the core switch? Do I even need the 2610, or can I just go with all 2510s? I'm new to VLANs as well, so I'm not sure what it is I need, but I would like an affordable infrastructure that won't need replacing 2-3 years down the line because I chose a product that was lacking.

    Read the article

  • tail -f and then exit on matching string

    - by Patrick
    I am trying to configure a startup script which will start Tomcat, monitor catalina.out for the string "Server startup", and then run another process. I have been trying various combinations of tail -f with grep and awk, but haven't got anything working yet. The main issue I am having seems to be with forcing the tail to die after grep or awk has matched the string. I have simplified it to the following test case. test.sh is listed below:

        #!/bin/sh
        rm -f child.out
        ./child.sh > child.out &
        tail -f child.out | grep -q B

    child.sh is listed below:

        #!/bin/sh
        echo A
        sleep 20
        echo B
        echo C
        sleep 40
        echo D

    The behavior I am seeing is that grep exits after 20 seconds, but the tail takes a further 40 seconds to die. I understand why this is happening - tail will only notice that the pipe is gone when it writes to it, which only happens when data gets appended to the file. This is compounded by the fact that tail seems to be buffering the data and outputting the B and C characters as a single write (I confirmed this with strace). I have attempted to fix that with solutions I found elsewhere, such as the unbuffer command, but that didn't help. Anybody got any ideas for how to get this working the way I expect? Or ideas for waiting for a successful Tomcat start (I've thought about waiting for a TCP port to know it has started, but suspect that will become more complex than what I am trying to do now)? I have managed to get it working with awk doing a "killall tail" on match, but I am not happy with that solution. Note I am trying to get this to work on RHEL4.
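
    One common fix is to avoid the anonymous pipe entirely: run tail into a named pipe, remember its PID, and kill just that tail once grep matches, so no killall is needed. A sketch, with the catalina.out path as an assumption:

        #!/bin/sh
        # Sketch: block until "Server startup" appears, then kill only our tail.
        # The catalina.out path is an assumption.
        LOG=/usr/local/tomcat/logs/catalina.out

        FIFO=/tmp/startup.$$              # named pipe carrying tail's output
        mkfifo "$FIFO"

        tail -n 0 -f "$LOG" > "$FIFO" &   # follow the log into the pipe
        TAILPID=$!

        grep -q "Server startup" "$FIFO"  # returns as soon as the line shows up
        kill "$TAILPID"                   # kill just this tail, not every tail
        rm -f "$FIFO"

        # ...start the dependent process here...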

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and an NFS server. On the NFS server (a Linux server) we have several EBS volumes attached, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on here, it would render our site unusable for the hours while Amazon is working on it. We want to try and prevent this downtime. I was thinking that we could prevent server downtime by setting up a new server temporarily, attaching the EBS drives (the RAID volume) to that server, and having our web servers point there during maintenance. This is a very high-risk operation, since it involves several terabytes of our production data. What would be the safe way to move our logical RAID drive (md0) over to a new Amazon instance? I was hoping that I could start by building the new server, mounting the EBS volumes, and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works - thus having it mounted on two instances at the same time - but I don't believe that is possible with the way filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so this is not a practical solution. Any ideas?
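
    A hedged sketch of the move itself: volume IDs, device names, and mount points are placeholders, and the aws CLI calls shown are the modern equivalents of the older ec2-api-tools commands. Writers should be quiesced (NFS stopped) before the unmount.

        # On the old NFS server: stop exports, unmount, and stop the array cleanly
        service nfs stop                       # or the distro's equivalent
        umount /export/media                   # placeholder mount point
        mdadm --stop /dev/md0                  # marks the member superblocks clean

        # Move each EBS member to the new instance (IDs/devices are placeholders)
        aws ec2 detach-volume --volume-id vol-0aaa1111
        aws ec2 attach-volume --volume-id vol-0aaa1111 \
            --instance-id i-0newserver --device /dev/sdf
        # ...repeat for the remaining RAID members...

        # On the new instance: reassemble from the superblocks and mount
        mdadm --assemble --scan
        mount /dev/md0 /export/media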

    Read the article

  • CentOS Installation on a Cisco MCS 7800

    - by William
    I'm having some problems installing CentOS 5.5 Final (i386) onto my server, a Cisco MCS 7800. The problem comes very early in the installation. When the welcome screen comes up and gives you the option of how to boot from the DVD, I press Enter to go into the graphical installer. The screen then just shows a blinking cursor in the top left and never goes any further (I thought it might just need time, but I let it sit for over 5 hours). I then booted it again and tried using "linux text", thinking it was a problem with the graphical installer. That didn't work; same problem. Then I tried a DVD of RHEL 5 and got the same problem, both graphical and linux text. At this point I think it's a hardware problem. The server has 2GB of ECC RAM, 1 Pentium 4 CPU @ 3.06GHz, and 2 WD hard drives (80GB) configured for RAID 0. (There is also an option in the BIOS for the OS type, and that is set to Linux.) If anyone has any idea what is going on, it would be helpful. Edit: Typing "text" doesn't change a thing; still stuck at the blinking cursor. I looked it up and it's really the same thing as typing "linux text", which, as stated in the first part of my question, I've already done.

    Read the article

  • PHP sessions corrupt

    - by Baversjo
    Using the symfony 1.4 framework I have created a website. I'm using sfGuard for authentication. Now, this is working great on WAMP (Windows): I can log in to several accounts in different browsers and use the website. I have Ubuntu Server 9.10 running Apache (everything up to date and with the default configuration). On my server, when I log in to the website in one browser it works great. But when I log in with another user account from my other computer on the public website, the login is successful, yet when I refresh or go to another page the first user is shown as logged in instead! Also, when I press logout, it doesn't show that I'm logged out after the page loads; when I press F5 again, I'm logged out. As mentioned, all this works as expected on my local installation. I'm thinking there's something wrong with my PHP session configuration on my Ubuntu server, but I've never touched it. Please help me. This is a school project and I'm presenting it today :(
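
    A couple of quick checks can narrow down whether each browser is really getting its own session on the Ubuntu box; the /var/lib/php5 path below is Ubuntu's default session directory, an assumption about this particular setup.

        # Where is PHP writing session data on this box?
        php -r 'echo ini_get("session.save_path"), "\n";'

        # Ubuntu's default is /var/lib/php5: it must be writable by www-data,
        # and each logged-in browser should get its own sess_* file
        ls -ld /var/lib/php5
        sudo ls -l /var/lib/php5 | head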

    Read the article

  • Active Directory + IIS + SQL + ASP.NET

    - by Amira Elsayed Ismail
    I have sent the following question to the Stack Overflow website: I have installed Windows Server 2008 R2 on a virtual machine. Can I install Active Directory with a domain controller + IIS + SQL Server on the same machine? I want to make a web application that will authenticate users against Active Directory; the web application should be published in IIS on the server, and users should access it remotely from home using my machine's domain name. Someone told me that it's very wrong to have IIS and Active Directory on the same machine. I got the following answer: You can't use Active Directory over the internet, at least not without something like a VPN as a middle man. Their home computers will not be joined to the domain, so there is no pass-through authentication. Yes, it's a bad idea to put AD on the web server. Why is too complex to get into in an answer here. Suffice it to say that even if you did do this, it probably would not work the way you are thinking it should. It's not impossible to do this. For instance, many of the Microsoft "Small Business" products put IIS, AD, and SQL Server on the same server. But, you kind of have to know what you're doing to configure it securely. Then I added the following comment: Thanks for your reply. So what do you think is the best way to do this, as I haven't done anything like that before? Should I install Active Directory on one machine and IIS on another machine? And what about SQL: should I add it to the same server as Active Directory? I also didn't mention that a Microsoft Dynamics server will access some information about work, and I have to read data from Axapta as well. Also, what is a VPN, and how can I use it to let users access my web application from anywhere? Sorry for my long questions, and thanks in advance; if anyone can help, I will be thankful.

    Read the article

  • Configure tomcat behind loadbalancer to respond on HTTP and HTTPS

    - by user253530
    I have 2 Tomcat machines behind a load balancer on Amazon EC2. Until now, the load balancer was configured to respond only on HTTPS, so in order to access our services you would go to https://url. Tomcat was configured to listen on 8080, but the connector had additional params telling Tomcat that it is behind a proxy and that it should respond as HTTPS on 443. The connector looks like this: <Connector scheme="https" secure="true" proxyPort="443" proxyHost="my.domain.name" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" useBodyEncodingForURI="true" URIEncoding="UTF-8" /> What I would like to do is open port 80 on the load balancer and basically allow traffic on both HTTP and HTTPS. I've configured the load balancer to send all HTTP traffic to the Tomcat machines on port 8088. I was thinking that I could define a new connector so that all HTTPS traffic goes to 8080 and HTTP to 8088. Unfortunately I did not succeed. Here is my connector: <Connector port="8088" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" useBodyEncodingForURI="true" URIEncoding="UTF-8" /> Am I missing something? Thanks

    Read the article

  • Monitoring whether Google Apps email address is reachable

    - by Acorn
    Backstory: I bungled things a bit the other day and inadvertently deleted the DNS overrides for my domain, including the MX records that point to Google Apps, causing 2 days of lost emails. What I want: I want to be able to monitor the email address/account so that I can be alerted if, for any reason, something has gone wrong and emails aren't arriving. Thoughts: I was thinking there might be a way to test the email without having to send an actual message. Does this exist? It wouldn't help if the DNS has reset itself to a different mail server, would it? The other idea was sending periodic emails to check that the address is working. How would you automate this? You'd need to somehow check that the email had arrived as well as check whether it had bounced. Are there any scripts that would do something like this? What would be the best method? Maybe a combination of checking that the MX records for the domain are set to what they're supposed to be, and sending automatic test emails to check that things are still functioning on the Google Apps end?
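
    A sketch of the MX-record half of that idea as a cron job, assuming dig and a working mail command are available; the domain and alert address are placeholders, and the alert should go to an address on a different domain so it still gets delivered when this one is broken.

        #!/bin/sh
        # Cron-able MX check; DOMAIN and ALERT are placeholders.
        DOMAIN=example.com
        ALERT=you@some-other-domain.com

        MX=$(dig +short MX "$DOMAIN")
        echo "$MX" | grep -qi 'aspmx.l.google.com' || \
            echo "MX for $DOMAIN no longer points at Google Apps: $MX" \
            | mail -s "MX check failed for $DOMAIN" "$ALERT"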

    Read the article

  • Would an invalid certificate cause an 0x8004010F sync error?

    - by hydroparadise
    We just migrated from Exchange 2003 to 2007 on what was a combo primary AD/DNS server, and it has not gone smoothly. We are now down to getting a new certificate (a bureaucratic process that's out of my hands) and users getting the 0x8004010F sync issue. We are only using Outlook 2007 as our email client, and the sync error appears exactly as follows:

        9:21:44 Synchronizer Version 12.0.6562
        9:21:44 Synchronizing Mailbox '<User>'
        9:21:44 Done
        9:21:44 Microsoft Exchange offline address book
        9:21:44 0X8004010F

    Now, I have read a number of TechNet articles on this issue, covering everything from adding a DNS A record for autodiscover.domain.com to syncing the old OAB to the new OAB. In other words, there are lots of things to try, but trial and error at this point might be hazardous to the server's health, and I am trying to narrow down the list of things to try. What has me thinking that the sync error could be related to the certificate is an event error message that says the following: "Microsoft Exchange could not find a certificate that contains the domain name mail.ccufl.org in the personal store on the local computer. Therefore, it is unable to support the STARTTLS SMTP verb for the connector Internet Mail with a FQDN parameter of mail.ccufl.org. If the connector's FQDN is not specified, the computer's FQDN is used. Verify the connector configuration and the installed certificates to make sure that there is a certificate with a domain name for that FQDN. If this certificate exists, run Enable-ExchangeCertificate -Services SMTP to make sure that the Microsoft Exchange Transport service has access to the certificate key." I am not fully clear on how the Exchange Transport service is related to synchronization, but my hunch is that it's probably not related to there not being a valid certificate. So to recap, would an invalid certificate cause an 0x8004010F sync error?

    Read the article

  • Outlook won't re-connect to exchange after network is re-connected

    - by stan503
    I have a setup at my desk where I connect my computer to an RJ45 switch that switches between two networks. One network is the corporate network, which is maintained by my company's IT, and the other is my own private network where I do testing (the two networks have to be separated). The corporate network hosts the Exchange server where I get e-mail. When I switch from the private network to the corporate network, I expect Outlook to reconnect to the Exchange server. However, I have found that sometimes when I come back, Outlook takes an extremely long time to reconnect. Send/Receive will give me back the error 'The server is not available' (0x8004011D). It will sit there for 10 minutes to a few hours before it finally reconnects. The only other option is to reboot my computer, which is a huge pain for me since I run multiple VMs on it. This usually happens when I've been connected to the private network for a significant amount of time, so I'm thinking it's because Outlook has cached the network status. Is there a way to force Outlook to do a 'hard' reconnect to the Exchange server? I'm using Windows XP SP3 with Outlook 2007.

    Read the article

  • Website Use Monitoring for 3 People

    - by linkedlinked
    I work in an IT startup with 2 partners, and I'm the programmer/IT guy -- in other words, the workhorse. To make a long story short, I'm doing most of the work right now, while they spend all day on Facebook. That's OK, because they're paying my salary, but if the project fails, I'm sure they'll blame me for it (I'm doing my best to make sure that doesn't happen!), and I want some sort of recourse. I already have an app that blocks time-wasters on my local PC and keeps logs of when the app is enabled (so I can say "I had Facebook blocked from 9am-5pm today."). Is there any way I can get a brief summary of the most heavily visited sites, split up by client PC? At the end of the month, I want to be able to say "You both load Facebook, on average, every 10 minutes. You spend hours a day on YouTube, and haven't opened up our bug tracker in weeks," and maybe have a nifty chart or graph to match it. We have a crappy D-Link router and no IT budget. They are both on Windows Vista; I run Ubuntu Linux. I don't want to install any monitoring software on their PCs, but I'm totally fine with, say, routing all the network traffic through my machine. I guess I can think of lots of ways to accomplish this (telnet into JSSH and list open tabs? log all the DNS requests, per domain? even thinking of setting up a webcam on my desk and just keeping 5-minute snapshots...), I just don't really know where to start. Any advice is appreciated, thanks!
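
    If the partners' traffic is routed, or at least resolved, through the Ubuntu machine, a DNS-level summary is one low-effort option: run dnsmasq with log-queries and summarize its log per client. A sketch, assuming dnsmasq's standard "query[A] domain from ip" log lines end up in /var/log/syslog:

        # Top queried domains per client, from dnsmasq's query log
        # (assumes dnsmasq started with log-queries, logging to /var/log/syslog)
        grep 'query\[' /var/log/syslog \
            | awk '{print $NF, $(NF-2)}' \
            | sort | uniq -c | sort -rn | head -20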

    Read the article

  • Warning messages while building Apache server

    - by GoinOff
    I am building Apache 2.4.6 from source and am not sure about a few warning messages I received during the RPM build process. The build completes OK and everything seems fine. BTW, this is on CentOS 5.5. During the make process:

        /home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr/libtool --silent --mode=install install mod_authn_file.la /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/modules/
        libtool: install: warning: remember to run `libtool --finish /usr/local/apache2/modules'

    What is this warning message about? "Remember to run libtool --finish"? Also, I see this:

        libtool: install: warning: `/home/johnm/dev/project1/install/linux/BUILD/httpd-2.4.6/srclib/apr-util/libaprutil-1.la' has not been installed in `/usr/local/apache2/lib'

    I am building Apache in a temp directory, but libtool seems to be looking in the wrong place (/usr/local/apache2/lib instead of /home/johnm/dev/project1/install/linux/tmp/usr/local/apache2/lib). This seems like something I can blow off? In my spec file I set DESTDIR to /home/johnm/dev/project1/install/linux/tmp, where the install files are placed:

        %install
        export DESTDIR=%{buildroot}
        make install

    Both messages appear numerous times during the make process. When I install the RPM on the system, everything appears to work without problems. Thinking I can ignore these messages? Or am I missing something important?

    Read the article

  • What is the best way to connect 3 switches with a router?

    - by Carlos Morales
    Hello everyone, I'm trying to rebuild the network at my work, and I was wondering what the best way is to connect three switches and a router. The router has 4 ports, so I thought I'd connect 2 switches to the router (each switch connected to the router with 2 cables) and then connect the third switch to one of the others with two cables. So it's like this: two cables from switch 1 to the router, two cables from switch 2 to the router, and two cables from switch 3 to switch 1 or 2. So my questions are: Is it better to connect the router to each switch with a single cable, or is it the more cables the better? If I connect switch 3 to switch 1 or 2, is it better to connect it with a single cable, or do you get better performance with more cables? If I'm wrong and there is a better or more efficient way to connect them, please let me know. The router is a Netgear RP114 (I'll upgrade it to a SonicWall NSA 240), switch 1 is a Netgear GS748T, switch 2 is a Cisco Catalyst 2924-XL, and switch 3 is a D-Link DGS-1024D. Thank you very much

    Read the article

  • Can I recover a zpool after it's been exported, given that devices have not been reallocated?

    - by cali-spc
    I had a zpool we'll call 'testpool'. testpool had 3 devices included in it, and a single zfs called 'test'. I needed to move 'test' to a new, smaller pool. I wanted to name the new pool the same name, 'testpool'. Basically I did the following:

        zfs send testpool@backup > /tmp/test-dump
        zpool export -f testpool
        zpool create -f testpool newdevice
        zfs receive -F testpool < /tmp/test-dump

    Unfortunately I found out that the testpool@backup snapshot was the wrong snapshot. Too old. I have yet to reallocate the three devices that were in the OLD testpool. (None of these 3 devices is 'newdevice'; they are a separate 3.) Is there any way I can recover the data on those devices? I'm thinking that since I named the new, smaller pool the same as the old zpool, I'm pretty much SOL. But if not, that would be nice to know. Edit: More info. I did a 'zpool import' and got this:

        bash-3.00# zpool import
          pool: testpool
            id: 14781458723915654709
         state: ONLINE
        action: The pool can be imported using its name or numeric identifier.
        config:

                testpool    ONLINE
                  c5t8d0    ONLINE
                  c5t9d0    ONLINE
                  c5t10d0   ONLINE

    So I'm guessing I just need the syntax to import this zpool using its numeric identifier, while giving it a new name. S.
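
    That syntax exists: zpool import accepts the numeric identifier in place of the pool name, with an optional new name as the last argument. A sketch using the id shown above (the new name is arbitrary):

        # Import the old pool by its numeric id under a different name so it
        # does not collide with the current 'testpool'
        zpool import 14781458723915654709 oldtestpool

        # then confirm the datasets are intact
        zfs list -r oldtestpool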

    Read the article
