
  • How do I keep a bridge enabled on a bonded interface?

    - by jlawer
    I'm working on setting up a pair of CentOS 6.3 servers that will run a couple of KVM VMs, and I've come across a problem setting up a bridge on a bond. I'm using mode 4 (802.3ad) bonding across a pair of stacked Dell PowerConnect 5524 switches connecting to R320 servers: two links (one to each switch) form a link aggregation group (802.3ad / LACP), with VLAN tagging on top of the bond. I've verified this happens on several other bonding modes too, so it isn't just a mode 4 issue.

    I'm testing what happens when one link drops (switch dies, cable breaks, etc.). Without a bridge (for KVM), everything works fine and failover happens as expected. With the bridge enabled, it works fine until failover (unplugging a cable). When failover happens, /var/log/messages shows the slave link going down, followed within a second by:

        kernel: br1: port 1(bond0.8) entering disabled state

    The thing is, /proc/net/bonding/bond0 shows the link is up as expected (simply with one slave instead of two). If I plug the cable back in, it recovers and brings the bridge back to an enabled state. I've actually tested this while a ping was occurring, and if the timing is right a packet will leave the system after the link is lost but before the disabled message appears.

    I assumed this disabled state was STP, but I have disabled STP in the bridge configuration and the issue still occurs; brctl showstp br1 still shows the link as disabled when it is running with one slave. I also switched between the NICs in the server (I have 2x Broadcom and 4x Intel), and it doesn't matter which configuration I use. Does anyone know of a way to force the bridge to stay enabled, or why it's detecting the bond as disabled when it isn't?
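
    In case it helps others reproduce this, here is a minimal sketch of the interface files such a CentOS 6 setup typically uses. The device names (bond0, bond0.8, br1) follow the question; the specific options are assumptions, not the poster's actual files:

        # /etc/sysconfig/network-scripts/ifcfg-bond0: LACP bond, no IP of its own
        DEVICE=bond0
        BONDING_OPTS="mode=802.3ad miimon=100"
        ONBOOT=yes
        BOOTPROTO=none

        # /etc/sysconfig/network-scripts/ifcfg-bond0.8: VLAN 8, enslaved to the bridge
        DEVICE=bond0.8
        VLAN=yes
        BRIDGE=br1
        ONBOOT=yes
        BOOTPROTO=none

        # /etc/sysconfig/network-scripts/ifcfg-br1: STP off, no forwarding delay
        DEVICE=br1
        TYPE=Bridge
        STP=off
        DELAY=0
        ONBOOT=yes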


  • My Computer hangs for a few minutes just after startup, and then is fine.

    - by EvilChookie
    So I just built myself a reasonably beefy computer and installed Windows 7 on it. However, within a few minutes of starting the machine up each morning, the computer will semi-hang: the mouse stays responsive, and most of the time I can open Task Manager or a new tab in Chrome, but occasionally windows will be labelled as 'Not responding'. Then the machine gets over its problem and is nice and quick until I turn it off. Here are my specs:

        CPU: AMD Phenom II X4 955 Black (quad core, 3.2 GHz)
        RAM: 4 GB of DDR3 1300
        Mobo: ASUS M4A785T-M (latest BIOS)
        Hard drives: 2x 1 TB Western Digital Caviar Blacks in RAID-0
        OS: Windows 7 Ultimate x64
        GPU: ASUS GT240 1 GB

    I believe this issue relates to the RAID array, as I didn't have the lockup problem before I created the array. (I purchased a second drive and reformatted after creating the RAID array, since the single drive was a little on the pokey side compared to the rest of the computer.) What I have tried:

        Updated RAID drivers
        Malware checks
        Windows Updates
        Disabled unnecessary services

    CPU and disk activity appear to be low (via Resource Monitor), and there are no strange errors in the error log. Any thoughts?


  • Issues with VPN functionality

    - by Xorandor
    I've been working on setting up VPN connectivity to our office location. We bought a Cisco WRV210, which has a built-in VPN server. Cisco has some software, QuickVPN, which is not as quick and easy as I had thought; I've had mixed experiences connecting with it on different machines. Instead I configured an IPSec VPN tunnel following a guide from TheGreenBow (http://www.thegreenbow.com/doc/tgbvpn_cg_linksys_wrv200_en.pdf) and tried out an evaluation of their software, and the VPN connection seems to be working: I'm able to RDP to a machine on the network (using its IP address, not its machine name), ping the router, and so on. What I'm trying to solve are two things.

    First, it's not like I'm "really" on the network, or at least I'm restricted to some degree when going through the VPN. I can't access a machine on the network by machine name, only by IP, and I can't ping a machine, though I can ping the router just fine. Could something be set up improperly? If so, I can of course supply additional information.

    Second, when I log onto the VPN, I would really like my outgoing traffic to go through the internet connection of the remote location. Basically, if I connect to the VPN I want my outgoing IP to be that of the remote location (needed for some IP restrictions on some of our servers). At a previous workplace it worked like this when we connected to our office VPN over PPTP with the built-in Windows VPN client. I'm not a huge expert on the topic, so any hints will be appreciated.
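
    To narrow down the name-resolution half of this, it can help to separate DNS from NetBIOS. A quick hedged check from the VPN client (the host names and IPs here are hypothetical):

        rem does the office DNS server answer across the tunnel?
        nslookup somehost.office.local

        rem reverse-resolve a known-good office IP
        ping -a 192.168.1.50

        rem query the machine's NetBIOS name directly at its IP
        nbtstat -A 192.168.1.50

    NetBIOS broadcasts generally do not cross a routed IPSec tunnel, which would be consistent with short names failing while plain IPs work.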


  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4x 2 TB disks. The array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create both xfs and ext3 filesystems; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful:

        Writing the inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, and then the next 50 are written (according to the console display).
        When I try to cancel the operation with Ctrl+C, it hangs for half a minute before it is really cancelled.

    The performance of the disks individually is very good: I've run bonnie++ on each one separately, with write/read values of around 95/110 MB/s. Even when I run bonnie++ on every drive in parallel, the values are only reduced by about 10 MB/s, so I'm excluding hardware and I/O scheduling in general as a problem source. I tried different values for stripe_cache_size and the readahead size without success, but I don't think they are that relevant for the filesystem creation operation. Server details:

        Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux
        mdadm - v2.6.7.1

    Does anyone have a suggestion on how to debug this further?
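
    One hedged suggestion (not a confirmed diagnosis): mke2fs can be told the array geometry so metadata writes align with stripes. With the 64k chunk above and a 4-disk RAID 5 (3 data disks) at the default 4k block size, stride = 64k / 4k = 16 and stripe-width = 16 * 3 = 48:

        # align ext3 to the md geometry (values derived from the setup above)
        mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0

        # also worth double-checking that no resync/check is competing for I/O
        cat /proc/mdstat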


  • Alienware Aurora R2 Slow Boot Up

    - by James R
    I have an Aurora R2 bought a few years ago, and recently I decided a RAM upgrade and a new Samsung SSD would be good for speed. So now it's super fast, with the exception of booting up: it still takes a good 2 minutes to get past the first splash screen in the BIOS. It's only the BIOS; after that it's like lightning. I've Googled the issue, and the usual cause is the BIOS trying to boot from anything it can, with the fix being to change the boot menu. However, I've changed it now and it's still slow. When I disconnect the USB devices it speeds up, but I can't do that every time I want to boot the PC! The only other option I can think of is upgrading the BIOS, but A04 seems to be the recommended one for Aurora R2s, so I don't know whether upgrading could cause issues, especially if it doesn't solve this one. Also, when I disable my original hard drive in the boot menu, the PC won't boot: despite the Samsung drive being fine to boot from, and the original not being needed (as far as I know) for starting Windows, it gives me an error message and makes me restart the PC with a new boot configuration (with the original drive as second choice). Any ideas on how to make the BIOS boot faster, and why I need to keep my original drive in the boot menu?


  • Issues with Nginx + Passenger production setup - loading time/request delay

    - by Dani Cela
    I'm having a bit of an issue relating to request time. I have nginx as a proxy server for a Ruby on Rails app running Passenger, and a PostgreSQL database server running on its own VM, separate from my nginx/application server. My issue is that when I try to access my products page, which does a lot of database queries, the request takes maybe 3-4 seconds. But the second I flood the web server with requests, I choke out the web server and requests take almost 20-30 seconds to process. The Rails server and database server do not crash, and their usage is not that high: each server has more than enough memory, and even CPU usage on the Rails server isn't more than 85% (high, but not maxed out). Is my problem related to my nginx proxy server? I don't really know how to explain this fully, so if you have a question please ask and I can clarify what I mean.

    EDIT: to see exactly what I mean regarding the database query, see http://207.245.4.215/products
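
    If the bottleneck is the app tier rather than nginx itself, the first knob to check in a Passenger setup is usually the process pool: with too few application processes, one slow 3-4 second request serializes everything queued behind it. A hedged sketch of the relevant directives (real Passenger/nginx directive names, but the values are made up):

        # in the nginx configuration (Passenger's nginx module)
        passenger_max_pool_size 6;     # more app processes = more slow requests in flight
        passenger_min_instances 2;     # keep workers warm instead of spawning on demand

    Watching passenger-status during a flood should show whether requests are piling up in the queue.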


  • Massive Memory Leaks?

    - by Mads
    Hi, I seem to have huge memory leaks, which are confusing me. I'm running Fusion 3.1 / Windows 7 on Snow Leopard. It's a clean install with all upgrades applied, and I've given Fusion 8 GB on a 14 GB machine. I've installed VS2008 and Eclipse in Windows 7; nothing unusual. Inside Task Manager in Windows 7, my memory footprint stays reasonable, at under 2 GB. But in OS X, Activity Monitor shows the footprint of vmware-vmx to be much larger. It starts at 2 GB, which seems fine, but whenever I'm actually doing anything in Windows, vmware-vmx's footprint grows at a few MB per second. After 20 minutes or so it's using ~10 GB and everything grinds to a halt. Throughout this, Task Manager still says I'm only using 2 GB, yet whatever I do in Windows seems to increase vmware-vmx's memory footprint; even closing down an application seems to make it go up. So is this par for the course in Fusion? I was previously using Parallels 3 / Vista under Leopard, and it worked fine. I'd assumed my new Fusion config would work better, but this makes it completely unusable. (And apparently I can't even ask tech support unless I buy a support package...) Any advice much appreciated. Thanks


  • Distortion problem with Creative audio equalizer

    - by e-t172
    Hi, I have a problem with the Creative Console EQ, and I don't know whether it's fixable (is the EQ software or hardware on these cards?). Basically, I get enormous distortion with certain sounds in the 30-125 Hz range: when it happens, I hear a kind of buzzing ("frrzzzz"; sorry, I'm French and don't know the right English word for it) on top of the original sound. I have a Sound Blaster Audigy SE, using the Daniel_K drivers on Windows 7 Professional x64, with all effects disabled except the EQ. Steps to reproduce:

        1. Put the card in 24bit/96kHz mode. (The problem is also present at 16bit/48kHz, but seems to be less audible.)
        2. In the Creative Console, use the following EQ settings [EQ screenshot omitted].
        3. Play this sound at a reasonably high volume. You should hear distortion on the two "booms", especially the second one.
        4. Disable the Creative EQ.
        5. Play the sound in an application with an integrated EQ (e.g. foobar2000, ffdshow) using the same EQ parameters. There is no distortion.

    Conclusion: the Creative EQ is broken. Is anyone having the same problem? I'm also interested in results with other Creative cards, or even other brands' soundcards with a similar EQ feature.


  • Why does running "$ sudo chmod -R 664 . " cause me to get access denied on all affected directories?

    - by Codemonkey
    I have a project folder with messy permissions on all files. I've had the bad habit of setting everything to octal permissions 777 because it solved all non-security-related issues; then FTP uploads, files created by text editors, etc. got their own sets of permissions, making everything a mess. I've decided to pull myself together and start using permissions the way they were meant to be used. I figured 664 was a good default for all my files and folders; I'd just remove permissions for others on private files, and add +x on executable files. The second I changed my project folder to 664, however:

        $ sudo chmod -R 664 .
        $ ls
        ls: cannot open directory .: Permission denied

    This makes no sense to me. I have read/write permissions, and I'm the owner of the project folder. The leftmost part of ls -l in my project folder looks like this:

        -rw-rw-r-- 1 codemonkey codemonkey ...
        drw-rw-r-- 5 codemonkey codemonkey ...
        -rw-rw-r-- 1 codemonkey codemonkey ...
        -rw-rw-r-- 1 codemonkey codemonkey ...
        drw-rw-r-- 3 codemonkey codemonkey ...
        -rw-rw-r-- 1 codemonkey codemonkey ...
        -rw-rw-r-- 1 codemonkey codemonkey ...
        -rw-rw-r-- 1 codemonkey codemonkey ...
        drw-rw-r-- 4 codemonkey codemonkey ...
        drw-rw-r-- 5 codemonkey codemonkey ...

    I assume this has something to do with the permissions on the directories, but what?
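
    For what it's worth, the execute bit is what lets a directory be entered and its contents listed, so a mode like drw-rw-r-- locks even the owner out of ls; files and directories usually need different modes. A sketch of the common repair, run from the project root (the exact target modes are a judgment call):

        # X sets execute only where it makes sense (directories, already-executable files)
        chmod -R u+rwX,g+rwX,o+rX .

        # or split the two cases explicitly
        find . -type d -exec chmod 775 {} +
        find . -type f -exec chmod 664 {} +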


  • How to correctly set GNU Screen to display currently running program in hardstatus

    - by johnny_bgoode
    I posted this question on SuperUser but it's hardly getting any views, so I thought I'd ask here as well. In bash, displaying the name of the current program in the GNU Screen hardstatus line takes only two configuration lines. First, tell screen what the end of your prompt normally looks like, and supply a default title for a window when you are sitting at the shell:

        shelltitle "$ |bash"

    Next, place this escape sequence in the PS1 variable, before the characters that normally terminate the prompt ('$ ' in this case):

        \033k\033\\

    This technique works, to a point: the hardstatus window title is updated to the name of the currently running program, and then switches back to the default title shortly after execution finishes. One major problem, however, is that this escape string is not itself escaped, causing line-wrapping problems with commands longer than the initial line. That was annoying, so I set out looking for a solution. It turns out that simply escaping the previous escape sequence corrects the line wrapping:

        \[\033k\]\[\033\\\]

    Great! My hardstatus window title still updates to the name of the currently running program, and now my longer commands wrap to the second line correctly. However, with this new escape sequence in my PS1, screen updates the window title to the actual command I am typing, not the name of the current program once it is executed. Has anyone gotten this working correctly, i.e. both line wrapping and proper updating of the hardstatus window title? Thanks!
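
    For comparison, one arrangement commonly reported to satisfy both goals is to keep the whole title escape inside a single \[ \] pair at the start of PS1, so bash counts zero printed characters while screen still sees the bare ESC k ESC \ token. A sketch (assuming a stock bash and the shelltitle line above):

        # ~/.screenrc
        shelltitle "$ |bash"

        # ~/.bashrc: note the visible prompt still ends in "$ " to match shelltitle
        PS1='\[\033k\033\\\]\u@\h:\w$ '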


  • v2v of RHEL5 box - issues with retaining MAC address

    - by Alex Berry
    For the last week we have been troubleshooting a customer's Red Hat virtual machine running on ESXi. We've been using Veeam to try to create an off-site replica and have been having trouble getting it to work on a decent schedule, and recently we noticed orphaned snapshots while looking at the datastore. You can see several snapshots in the same folder, and they cause issues with replication and backup, so we decided the cleanest way forward was to v2v the machine to another datastore so that we had a clean single-vmdk setup to work with. This is where our trouble started.

    We first tried a v2v using VMware Converter connected to the powered-on machine, as we were having issues doing an offline v2v. This copied fine, but when I tried to set a static MAC using this article (http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=507), the new VM wouldn't take the address: it simply obtained a new MAC, received a DHCP lease, and then would only boot to a blank red screen, never the login screen. So the next step was an offline v2v, once we finally got it working. Same thing: I followed the KB to the letter and still it wouldn't take the MAC. I then tried again, and upon completion I compared the old and new VMX files, copying every identifier and variable possible, then unregistered both VMs, uploaded the new VMX file and booted, only to see the same results. Finally I did the same as above but copied the disk using dd to a second attached vmdk and attached that to the new VM. Still no luck.

    After downloading the modified VMX file after the first boot and comparing it to the original I created, I found that the BIOS UUID had changed from the one I typed in manually, so I'm assuming this may be the snagging point, but I have no idea. I've never had this issue on a P2V, and I'm wondering if someone could shed some light on this. Maybe it's to do with RHEL licensing?
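
    One guest-side detail worth ruling out (an assumption on my part, but a common v2v snag on Red Hat guests): a MAC address pinned inside the VM can fight with whatever the VMX provides. On RHEL 5 that usually lives in the ifcfg file:

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        # (the MAC below is a hypothetical example; delete the line,
        #  or set it to the address configured in the VMX)
        HWADDR=00:0C:29:AA:BB:CC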


  • diskmgmt.msc: Cannot delete volume from USB

    - by Notinlist
    I have a USB drive about 8 GB in size. It has a single 169 MB FAT32 partition; I don't know why, I got it that way. I wanted to delete this small partition and create a single NTFS volume on the drive. First, I noticed that the "Delete volume" option is disabled (grayed out). I then tried "Change drive letter and paths..." and removed "F:", to make sure there were no open files on it; "Delete volume" was still disabled. Then I got suspicious and right-clicked on the "Unallocated" area, and noticed I had no useful options there either: all the "New * volume" items are disabled. I exited diskmgmt.msc, ran cmd.exe with administrator privileges, launched diskmgmt.msc from it, and had the same experience. Why can't I do anything with this disk? I've read advice about downloading some alternative free software, but I'd rather not if possible; I still hope that Windows 7 Enterprise 64-bit alone can reinitialize a USB drive without external help. I also cannot do anything with my other 8 GB pendrive, which is a single NTFS volume; I tried to delete it, but the option is disabled there too. Maybe some setting somewhere prevents me from partitioning USB disks? (I do have the freedom to remove my D: partition, which is the second one, not counting "System Reserved", on my SSD.)
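
    If the Disk Management GUI stays stubborn, Windows 7 does ship a built-in command-line alternative, so no third-party download is needed. A sketch (the disk number here is hypothetical; triple-check it, since clean wipes the partition table):

        diskpart
        DISKPART> list disk
        DISKPART> select disk 2
        DISKPART> clean
        DISKPART> create partition primary
        DISKPART> format fs=ntfs quick
        DISKPART> assign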


  • How to organize deployment process in Chef-controlled environment?

    - by Alex
    I have a Linux web infrastructure which consists of 15 virtual machines and over 50 services, fully controlled by Chef. Most of the services are developed internally. Currently the deployment process is triggered by a shell script: a build system (a mix of Python and shell scripts) packages the services as .deb files and puts the packages into a repo. It then runs apt-get update on all 15 nodes, because the standard Chef apt cookbook only runs apt-get once per day and we definitely do not want to run apt-get update unconditionally on every chef-client wake. Finally, the build system restarts the chef-client daemons on all 15 nodes (we need this step because of Chef's pull nature).

    The current process has a number of drawbacks we want to address. First, it is asynchronous: the deployment script does not check the chef-client logs after restart, so we don't even know whether the deployment succeeded; it does not even wait for the Chef clients to complete their run. Second, we definitely do not want to force chef-client restarts on all nodes, because we usually deploy only a small number of packages. And third, I am not sure that using chef-client for deployment is legitimate at all; we may just be doing it wrong from the start. Please share your thoughts/experience.
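
    On the synchronous-trigger point, one common pattern (a sketch, assuming knife is configured against the same Chef server; the role name is hypothetical) is to push runs with knife ssh instead of restarting daemons, which also surfaces each node's exit status:

        # run chef-client only on the affected nodes and fail the deploy on error
        knife ssh 'role:webapp' 'sudo chef-client' || exit 1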


  • QMail does not deliver to remote mailboxes for my own domain

    - by lorenzo-s
    Sorry for the title; I don't know how to sum this situation up. I have a web server at mydomain.com running qmail for website-related mail delivery (newsletters, sign-up confirmations, etc.). qmail here is used only to send mail, because I have a fully working Google Apps Gmail account associated with mydomain.com for normal email receiving. qmail runs fine when sending email to remote addresses, for example to someone@example.com, but fails when sending to someone@mydomain.com. I think that's because the server believes it has to manage mailboxes for mydomain.com locally, instead of handing them off to Gmail. Here is /var/log/qmail/current for two emails; the first is sent without problems to example.com, the second fails because it is for mydomain.com:

        2012-11-15 15:04:11.551933500 new msg 262580
        2012-11-15 15:04:11.551936500 info msg 262580: bytes 5604 from <[email protected]> qp 5185 uid 33
        2012-11-15 15:04:11.575910500 starting delivery 316: msg 262580 to remote [email protected]
        2012-11-15 15:04:11.575912500 status: local 0/10 remote 1/20
        2012-11-15 15:04:12.189828500 delivery 316: success: 74.125.136.27_accepted_message./Remote_host_said:_250_2.0.0_OK_1352991894_j49si13055539eep.9/
        2012-11-15 15:04:12.189830500 status: local 0/10 remote 0/20
        2012-11-15 15:04:12.189831500 end msg 262580

        2012-11-15 16:49:20.270332500 new msg 262580
        2012-11-15 16:49:20.270336500 info msg 262580: bytes 2192 from <[email protected]> qp 5479 uid 33
        2012-11-15 16:49:20.315125500 starting delivery 323: msg 262580 to local [email protected]
        2012-11-15 16:49:20.315128500 status: local 1/10 remote 0/20
        2012-11-15 16:49:20.320855500 delivery 323: failure: Sorry,_no_mailbox_here_by_that_name._(#5.1.1)/
        2012-11-15 16:49:20.320858500 status: local 0/10 remote 0/20
        2012-11-15 16:49:20.372911500 bounce msg 262580 qp 5484
        2012-11-15 16:49:20.372914500 end msg 262580

    As you can see, it says "Sorry, no mailbox here by that name", and I can't say it's wrong :) How do I solve this? How can I let Google Apps Gmail manage incoming email for mydomain.com, even for messages sent by mydomain.com's own qmail server?
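
    That bounce matches how qmail routes mail: any domain listed in control/locals (or virtualdomains) is delivered to local mailboxes. A hedged sketch of the usual adjustment, assuming a standard qmail layout:

        # mydomain.com must NOT appear here if Gmail handles its mailboxes
        grep mydomain.com /var/qmail/control/locals /var/qmail/control/virtualdomains

        # optionally pin the domain's mail to Google's MX
        echo "mydomain.com:ASPMX.L.GOOGLE.COM" >> /var/qmail/control/smtproutes

        # make qmail-send reread its control files (daemontools layout)
        svc -t /service/qmail-send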


  • Desktop PC does not power up on power button

    - by hIpPy
    When I press the power button on my desktop, it does not power up completely. Before I press the power button, I see lights on the motherboard; everything is normal. On pressing the power button, the fans on the CPU, graphics card and motherboard spin for a second or two and then stop. No beeps during this process. It has been doing this for a while now, but it used to start up after some trials. Once it starts up, I have NO issues at all, like random shutdowns, so it is not an OS issue.

    Update: I left the desktop off for a few days and it started. I'm just guessing here, but it seems as if the PSU (Antec TP2-550ATX) is dying and no longer delivers enough power; just a guess. It's an old desktop, assembled in 2005, but I have maintained it well.

    Update: I always keep the desktop running and never shut it down. During updates or manual restarts, it powers up without issues. I wonder if this sheds light on the issue. Any idea how I can narrow it down, e.g. how to find out whether the PSU is dying? I'd really like to fix this. Please help. Thanks. Below is the complete configuration:

        DFI LAN-Party UT NF4 Ultra-D 6/23 {6.70}, Evercool EC-VC-RE 41/47C
        AMD Opteron 170 2.0GHz {1.3.2.16} 1.312V 36/41C, ThermalRight SI-120, Panaflo 120x38mm
        OCZ Platinum 2x1GB 200MHz 2.66V 3-3-2-7 1T
        XFX 7800GTX 256MB 475/1250MHz {91.31}, Zalman VF900 Cu led 41/56C
        WD Caviar 320GB 7200RPM 16MB SATA 3Gb/s
        Antec TP2-550ATX
        Antec P180
        WinXP sp3
        Logitech MX310
        Razer Mantis Speed
        BenQ FP91G+ 19" LCD 8ms DVI
        Creative Audigy2 ZS {4.42}
        BenQ DW1640
        Logitech z-5300e 5.1 280W

        Legend: driver versions: {}, user settings: [], voltage: V, wattage: W, temperature: C (Celsius) min/max


  • Amazon EC2 NAT instance - goes out but not back in

    - by nocode
    I've followed Amazon's steps; here is what I've done. I created 6 subnets: 4 private (SN1: 10.50.1.0/24, SN2: 10.50.2.0/24, SN3: 10.50.3.0/24, SN4: 10.50.4.0/24) and 2 public (SN5: 10.50.101.0/24 and SN6: 10.50.102.0/24).

    - I have a bastion host and a NAT instance on SN5 and assigned EIPs to both. I created a test instance on SN1.
    - The NAT instance has the source/destination check disabled.
    - On the NAT instance, I bootstrapped the following commands:

          echo 1 > /proc/sys/net/ipv4/ip_forward
          iptables -t nat -A POSTROUTING -s 10.0.0.0/16 -j MASQUERADE

    - In my VPC, the private subnets have their own route table with 0.0.0.0/0 pointing to the NAT instance and all 4 private subnets associated with it. A second route table points 0.0.0.0/0 at the IGW, with the 2 public subnets associated with it.
    - For security groups, the NAT instance accepts all traffic from each of the 4 private subnets, and all outbound traffic is allowed. The test server allows all outbound access and all traffic from the public subnet of the NAT host.

    I can ping internally with no issues. On my test instance, if I try to ping google.com, DNS resolves but I never get a reply. On the NAT instance, tcpdump shows the request going out to google.com, but no reply coming back. The NAT host itself can ping google.com and receive replies, and when I ping the NAT instance from the test host, tcpdump shows both request and reply. Is there something I'm missing?

    EDIT: I've figured it out. I had to save the iptables config and restart the service.
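
    For reference, a sketch of the fix described in the EDIT, using the stock RHEL-style init scripts that Amazon Linux ships:

        # persist the live ruleset (including the MASQUERADE rule) and reload it
        service iptables save
        service iptables restart

        # make ip_forward survive reboots as well
        echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf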


  • Windows 7 Busy Icon won't stop spinning

    - by Lynda
    In Windows 7 you have the busy icon: a little blue wheel spinning beside the cursor when the machine is doing something. Mine has started to spin for about a second every 3-5 seconds, and there is nothing running on the computer, as far as I can tell, to cause it. It gets very annoying and I am not sure how to make it stop. Ideas? A CPU usage gadget says less than 5% of the CPU is in use and I have over 6 GB of RAM free (8 GB total). I am running a 3.6 GHz AMD FX-4100 processor.

    EDIT: What I am actually seeing is the "working in background" pointer (the pointer is still visible beside the busy icon).

    EDIT 2: After realizing this started with a printer installation, I turned the printer on and the busy icon stopped. The printer is set up wirelessly. How do I stop Windows from (I guess) polling for the printer? That is what I assume it is doing. It is an HP Officejet 8600.


  • Windows Photo Viewer can't open this picture because you don't have the correct permissions to access the file location

    - by Software Monkey
    My system is Windows 7, fully up to date with all patches and options (except Microsoft Silverlight, which I refuse to install). I get this error whenever I try to open an image with Windows Photo Viewer, such as when previewing from Explorer or opening an image attached to an email. I have already verified correct permissions on the file and all folders in the path. The strange thing is that every other program opens the images fine, including "Slideshow" from Windows Explorer. Even more strange, WPV's "Open" menu lists the other programs for images, including GIMP and MS Paint, and they open the very file WPV complains about just fine. That should eliminate permissions as the problem, especially since (logically at least) they open read/write while WPV is read-only; I have even edited and saved the images that WPV will not open. I am out of ideas, and searching the web turns up only the same tired repetitions of some flavour of "take ownership and reset permissions for the entire drive", which I have already done, and which is counter-indicated by the fact that only Windows Photo Viewer has a problem. The one slightly unusual thing is that my normal files are all on a second HDD mounted into C:, while email attachments go to the temporary folder C:\Temp\, which is directly on that drive.


  • Migrate users from one Active Directory domain to another?

    - by Matt
    I work for a company that hosts desktops for a number of different companies. At the moment, all the clients access a single domain controller called HOSTING, with a group for each company. All of the hosting servers exist on the same network and are therefore potentially browsable by other terminal servers. This has raised some security issues, I've found the security a little tricky to manage, and it's possible to see who the other hosted companies are, even though their data cannot be seen by other users. What I'd like to do is isolate each client's terminal server(s) into its own VLAN. In addition, I'm thinking each terminal server would have its own DC, which could just run on the TS for that company (overhead for a DC is fairly minimal). This would completely prevent users on one TS from seeing the other companies. Firstly, does this sound like a sensible plan? And secondly, if it is, how would I go about pulling the accounts from the HOSTING domain into a new domain, ideally without users needing to change their passwords?


  • Dedicated virtual setup is slow with WordPress

    - by kovshenin
    Hey. I'm running a Fedora Linux server on the Amazon EC2 platform, and I'm pretty sure there's something wrong with my configuration, as it seems to be very slow. SSH sometimes takes over 30 seconds to connect. A WordPress-generated web page can take 5 seconds to load, or it can take 20 seconds, which is pretty awkward. MySQL queries all execute in under a second, so I don't think that's the cause. I'm not really sure where the issue lies: a simple page written in PHP loads instantly, while a fresh WordPress installation starts lagging. The same site works perfectly on grid hosting at MediaTemple, for instance, so I'm pretty sure I've missed something. Could you please direct me to the right tools and articles to help me track this down? Thanks so much!

    Fedora Core 8, PHP 5.2.6, MySQL 5.0.45, OpenSSH 4.7p1, OpenSSL 0.9.8b. PHP is configured as an Apache 2.2.9 module; all websites are virtual-host based. I have some ongoing PHP scripts running from time to time in the background via cron. Thanks.
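
    The 30-second SSH connects in particular look more like reverse-DNS timeouts than load. Two cheap things to rule out (these are standard directives, but whether they are the culprit here is an assumption):

        # /etc/ssh/sshd_config: stop resolving the client's PTR record on connect
        UseDNS no

        # httpd.conf: make sure Apache isn't resolving client IPs for its logs
        # (Off is the default, but worth confirming)
        HostnameLookups Off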


  • How can I set up a touchscreen as the non-primary monitor in a dual monitor situation?

    - by bazzjedi
    I have been looking into adding a second monitor and thought of using a touchscreen. Will a touchscreen monitor work in a dual setup if it is not the main monitor? I haven't found any information on this; I've only found information about the touchscreen being the main monitor. That is, except at this SU question (snippet below):

        If you mean do the touch screen features still work on the other machine, the answer is yes. Past that, you can even see some of the touch screen features on a non touch screen monitor (just not the multi touch features!) For example, on the taskbar, click (without releasing) on any icon and then drag the mouse up, and you will see that it does the same as using your finger and dragging up.

    I have an HP 2510 as the main monitor and am thinking of adding an HP 2310ti (the touchscreen) as the secondary. My graphics card is a GeForce GTX 295, and I'm running Windows 7 Professional.


  • Re-installing Windows on an old laptop

    - by Khaled
    I have an old laptop and I want to re-install Windows XP on it. The problem is that this laptop does not have an optical drive. I checked the boot sequence in the BIOS; it does not show an option to boot from USB, only two options:

        1. Boot from HD.
        2. Boot using the Realtek agent (network boot).

    I tried copying the Windows CD to the second drive, D:\, and running the installer from there, but I could not format the C:\ drive; Windows complains that setup files would be removed, or something like that. I tried to boot the laptop using PXE, but the DHCP request did not seem to get answered. I thought I could use a USB CD-ROM drive (I don't have one to try), but it might not work, as there is no option to boot from USB. Do you think it would work? Do I have other options to try? Any recommendations?
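
    Since the BIOS does offer the Realtek network boot, PXE is probably the most promising path, and the unanswered DHCP suggests nothing on the LAN is offering boot options yet. A hedged sketch using dnsmasq in proxy-DHCP mode on another machine on the same subnet (the addresses and paths are hypothetical, and an XP installer still needs a PXE-bootable environment such as pxelinux on top of this):

        dnsmasq --port=0 \
                --enable-tftp --tftp-root=/srv/tftp \
                --dhcp-range=192.168.1.0,proxy \
                --pxe-service=x86PC,"Install XP",pxelinux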


  • PowerShell 3.0 x64 bit broken after installing KB2506143

    - by Dave Parker
    I have searched using all kinds of variations on the relevant terms and cannot find a single other instance of someone having this exact problem, so I am hoping someone here has a clue.

    Problem: I installed Windows Management Framework 3.0 (KB2506143) by downloading and running Windows6.1-KB2506143-x64.msu from Microsoft.com. Once it completed, I rebooted my machine as requested. After rebooting and logging in, when I try to run the 64-bit PowerShell shell, it comes up for a second and then goes away. The 32-bit shell works fine; it is just the 64-bit one that fails. Looking in the Fusion logs, I found:

        *** Assembly Binder Log Entry (10/4/2012 @ 1:51:48 PM) ***
        The operation failed.
        Bind result: hr = 0x80070002. The system cannot find the file specified.
        Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework64\v2.0.50727\mscorwks.dll
        Running under executable C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe
        --- A detailed error log follows.
        === Pre-bind state information ===
        LOG: User = ********\*****
        LOG: DisplayName = Microsoft.PowerShell.ConsoleHost, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL
        <remainder omitted>

    GacUtil reveals that there is a Microsoft.PowerShell.ConsoleHost, Version=1.0.0.0, but no 3.0.0.0. I tried uninstalling KB2506143 (which removed MSVCRT90.dll and caused Windows Live Messenger to fail on load after rebooting again, so I ran a repair install of Windows Live Essentials to fix that) and then re-installing it, but nothing changed. If it helps, here are what I think are the relevant parts of my hardware/software environment:

        Dell Latitude E6510, 8GB RAM
        Windows 7 Professional 64-bit with SP1
        Visual Studio 2010 Professional installed (includes .NET 4.0)
        Visual Studio 2012 Professional installed
        Microsoft Forefront Client Security

    Any clues out there? Thanks, Dave
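
    For anyone reproducing the check, the assembly listing came from something like the following (gacutil ships with the Windows SDK, not with Windows itself):

        gacutil /l Microsoft.PowerShell.ConsoleHost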


  • Transparent proxying leaves sockets with SYN_RCVD in MacOS X 10.6 Snow Leopard (and maybe FreeBSD)

    - by apenwarr
    I'm trying to create a transparent proxy on my Mac OS X machine in order to port the sshuttle ssh-based transproxy VPN from Linux. I think I almost have it working, but sadly, almost is not 100%. The short version is this. In one window, start something that listens on port 12300:

        $ while :; do nc -l 12300; done

    Now enable proxying:

        # sysctl -w net.inet.ip.forwarding=1
        # sysctl -w net.inet.ip.fw.enable=1
        # ipfw add 1000 fwd 127.0.0.1,12300 log tcp from any to any

    And now test it out:

        $ telnet localhost 9999   # any port number will do
          (this works; type stuff and you'll see it in the nc window)
        $ telnet google.com 80    # any host/port will do
          (this *doesn't* work!)

    After the latter experiment, I see lines like this in netstat:

        $ netstat -tn | grep ^tcp4
        tcp4  0  0  66.249.91.104.80     192.168.1.130.61072  SYN_RCVD
        tcp4  0  0  192.168.1.130.61072  66.249.91.104.80     SYN_SENT

    The second socket belongs to my telnet program; the first is more suspicious. SYN_RCVD implies that my SYN packet was correctly captured by the firewall and taken in by the kernel, but apparently the SYNACK was never sent back to telnet, because it's still in SYN_SENT. On the other hand, if I kill the nc server, I get this:

        $ telnet google.com 80
        Trying 66.249.81.104...
        telnet: connect to address 66.249.81.104: Connection refused
        telnet: Unable to connect to remote host

    ...which is as expected: my proxy server isn't running, so ipfw redirects my connection to port 12300, which has nobody listening on it, i.e. connection refused. My uname says this:

        $ uname -a
        Darwin mean.local 10.2.0 Darwin Kernel Version 10.2.0: Tue Nov 3 10:37:10 PST 2009; root:xnu-1486.2.11~1/RELEASE_I386 i386

    Does anybody see any different results? (I'm especially interested in Snow Leopard vs. Leopard results, as there seem to be some internet rumours that transproxy is broken in the Snow Leopard version.) Any advice for how to fix this?

