Search Results

Search found 5057 results on 203 pages for 'force'.

Page 146/203 | < Previous Page | 142 143 144 145 146 147 148 149 150 151 152 153  | Next Page >

  • I used disk copy to clone my drive, now my windows 7 profile won't load correctly

    - by RzK
    I used EaseUS Disk Copy after Acronis, Clonezilla, and Windows image restore all failed me. It basically copies all sectors; I set it to skip bad sectors (40 of them). The source drive works, it just gave me a couple of errors and at one point stopped booting. The new drive is an identical copy, minus those 40 bad sectors. It is set to C: and marked as the active partition, and I rebuilt the boot order. I've run sfc /scannow and chkdsk /r; chkdsk found about 20 KB of bad sectors, if I remember right. The issue now is that when I log into my profile (which was preserved correctly), I get a blank light-blue wallpaper (the non-licensed look), explorer.exe is not running, and there are only 4 processes in Task Manager, including Task Manager itself. I would try a repair install, but Ctrl+E will not open anything, and nothing opens even after I force-start explorer.exe, almost as if all services are down. What should I do? A fresh install is almost not an option; I want to fix this issue. sfc /scannow /offbootdir=c:\ /offwindir=c:\windows returns "Windows Resource Protection could not perform the requested operation".
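    A common cause of this exact symptom (blank wallpaper, almost no processes running) is Windows falling back to a temporary or damaged profile after the clone. A minimal diagnostic sketch, run from an elevated command prompt; nothing here is specific to this machine:

        rem List the profile registry entries; a subkey for your SID ending in ".bak"
        rem suggests the original profile was flagged as damaged on first boot.
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s
        rem Confirm each ProfileImagePath still points at a folder that exists on the cloned C: drive.
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath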

    Read the article

  • Skip all warning prompts on ACPI shutdown?

    - by N Rahl
    When I issue an ACPI shutdown command to a Windows XP guest machine from the host VM server, I want Windows to shut down. The problem is that Windows always wants to ask some question or another rather than just shutting down. I need shutdown to be reliable no matter what is running or going on, so I can automate shutdowns from the host machine, but I want it to be as graceful as possible rather than just pulling the plug. Some problems: If a user is logged in, ACPI shutdown causes a box to appear that asks, "Are you sure you want to shut down while other users are logged in?", and this prevents shutdown until someone connects to the machine and clicks "Yes". In this case it should try its best to gracefully log out all users, using force if necessary, and then shut down without prompting. Busy or non-responding programs, or programs asking to save data, can prevent Windows from shutting down until a user answers a prompt. The shutdown should attempt to save data and wait maybe 30 seconds for non-responding programs, but should get aggressive with stubborn programs: "Nope, time's up! 3, 2, 1, goodbye!" Is there a registry setting I can change from ACPI_Shutdown: "Shut down if Windows feels like it" to ACPI_Shutdown: "Just do it. Kill programs, bump users, try to be graceful about it, but when I come back, I expect you to be off."? This should respond to the ACPI shutdown command and not be a script on Windows, unless that script is triggered by the ACPI power button. I'm hoping this can be changed with registry options.
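    For the hung-program part of this, the stock registry values that control how aggressively Windows XP ends applications at shutdown are worth listing; a sketch, with timeout values in milliseconds chosen as examples. Note the HKCU values apply per user (or via the Default User hive), and the logged-on-users prompt is a separate issue these do not address:

        rem End hung or busy applications automatically instead of prompting the user
        reg add "HKCU\Control Panel\Desktop" /v AutoEndTasks /t REG_SZ /d 1 /f
        rem How long to wait for an application to exit before killing it (example: 30 s)
        reg add "HKCU\Control Panel\Desktop" /v WaitToKillAppTimeout /t REG_SZ /d 30000 /f
        rem How long before a non-responding application is considered hung (example: 5 s)
        reg add "HKCU\Control Panel\Desktop" /v HungAppTimeout /t REG_SZ /d 5000 /f
        rem System-wide: how long to wait for services to stop before killing them
        reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v WaitToKillServiceTimeout /t REG_SZ /d 30000 /f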

    Read the article

  • Get Python to raise MemoryError instead of eating all my disk space

    - by asmeurer
    If I run a Python program with a memory leak, I would normally expect the program to eventually die with MemoryError. Instead, what happens is that all the virtual memory is used until my disk runs out of space. I am running Mac OS X 10.8 on a Retina MacBook Pro. My computer generally has between 10 GB and 20 GB free. Mac OS X is smart enough not to die completely when the disk runs out of space (rather, it gives me a dialog letting me force-quit my GUI programs). Is there a way to make Python just die when it runs out of real memory, or some reasonable amount of virtual memory? This is what happens on Linux, as far as I can tell. I guess Mac OS X is more generous than Linux with virtual memory (the fact that I have an SSD might be part of this; I don't know just how smart OS X is with this stuff). Maybe there's a way to tell the Mac OS X kernel never to use so much virtual memory that it leaves less than, say, 5 GB free on the hard drive?
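    One experiment worth sketching: Python can impose its own address-space cap through the resource module, so allocations past the cap raise MemoryError instead of pushing the OS into swap. Whether OS X enforces RLIMIT_AS as strictly as Linux does is exactly the open question here, so treat this as a test rather than a fix; the 4 GB cap is an arbitrary example:

        import resource

        # Cap this process's virtual address space at ~4 GB (example value).
        # On Linux this reliably turns runaway allocation into MemoryError;
        # on OS X the kernel may enforce it only loosely, or not at all.
        soft, hard = resource.getrlimit(resource.RLIMIT_AS)
        resource.setrlimit(resource.RLIMIT_AS, (4 * 1024 ** 3, hard))

        try:
            hog = []
            while True:
                hog.append(bytearray(100 * 1024 * 1024))  # grab memory in 100 MB chunks
        except MemoryError:
            print("hit the cap instead of filling the disk")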

    Read the article

  • Start daemon after specific samba share is mounted

    - by getack
    I asked this question on AskUbuntu, but it's not getting any traction there, so I'll try here as well. I have a homebrew headless NAS running 12.04. In it I have a bunch of disks that are presented as a Samba share thanks to Greyhole. If I want to do anything to the files within this share, I must do it through Greyhole so that everything is updated properly. Thus, the share must be mounted locally and then accessed from there if I want to work on the files from the local machine. I do this mounting automatically thanks to these instructions. I also have Deluge installed to take care of all my torrenting needs. Deluge's default download location is in this share, so that all downloads are immediately available to the rest of the network. Obviously, for everything to work the share must be mounted, otherwise Deluge is going to have a problem downloading to it. The problem is that Deluge seems to start before the shares are mounted when the system boots, so downloading/seeding does not continue automatically after boot. I have to log in and force a manual rescan and start on each torrent, otherwise all the torrents just hang on the error. Is there a way I can make Deluge start only after the shares are properly mounted? I looked into Upstart's "emits" functionality, but I cannot seem to get it to work properly. Any advice?
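    A sketch of the Upstart route, assuming the share ends up mounted at /mnt/greyhole (substitute the real mount point) and that mountall's per-mount "mounted" event fires for it on 12.04; the job name and the deluge user are assumptions:

        # /etc/init/deluged-after-share.conf  (hypothetical job name)
        description "Start deluged only once the Greyhole share is mounted"

        # MOUNTPOINT must match the share's local mount point exactly
        start on mounted MOUNTPOINT=/mnt/greyhole
        stop on runlevel [016]

        setuid deluge            # assumption: adjust to whichever user deluged runs as
        exec /usr/bin/deluged -d # -d keeps deluged in the foreground for Upstart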

    Read the article

  • vsftp login errors 530 login incorrect

    - by mcktimo
    Using Ubuntu 10.04 on an AWS EC2 instance. I was happy just using SSH, but then a WordPress plugin needs FTP access. I only need FTP access for one site, www.sitebuilt.net, which is in /home/sitebuil. I installed vsftpd and PAM and followed suggestions that got me to the following state.

    /etc/vsftpd.conf

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        xferlog_file=/var/log/vsftpd.log
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/private/vsftpd.pem
        guest_enable=YES
        user_sub_token=$USER
        local_root=/home/$USER
        chroot_local_user=YES
        hide_ids=YES
        check_shell=NO
        userlist_file=/etc/vsftpd_users

    /etc/pam.d/vsftpd

        # Standard behaviour for ftpd(8).
        auth required pam_listfile.so item=user sense=deny file=/etc/ftpusers onerr=succeed
        # Note: vsftpd handles anonymous logins on its own. Do not enable pam_ftp.so.
        # Standard pam includes
        @include common-account
        @include common-session
        @include common-auth
        auth required pam_shells.so
        # Customized login using htpasswd file
        auth required pam_pwdfile.so pwdfile /etc/vsftpd/passwd
        account required pam_permit.so
        session optional pam_keyinit.so force revoke
        auth include system-auth
        account include system-auth
        session include system-auth
        session required pam_loginuid.so

    /etc/vsftpd_users

        sitebuil
        tim

    /etc/passwd (excerpt)

        sitebuil:x:1002:100:sitebuilt systems:/home/sitebuil:/bin/sh
        ftp:x:108:113:ftp daemon,,,:/srv/ftp:/sbin/nologin

    /etc/vsftpd/passwd

        sitebuil:Kzencryptedpwd

    /var/log/vsftpd.log

        Wed Feb 29 15:15:48 2012 [pid 20084] CONNECT: Client "98.217.196.12"
        Wed Feb 29 15:16:02 2012 [pid 20083] [sitebuil] FAIL LOGIN: Client "98.217.196.12"
        Wed Feb 29 16:12:33 2012 [pid 20652] CONNECT: Client "98.217.196.12"
        Wed Feb 29 16:12:45 2012 [pid 20651] [sitebuil] FAIL LOGIN: Client "98.217.196.12"
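    One detail worth checking, since pam_pwdfile only understands crypt-style hashes: if /etc/vsftpd/passwd was generated some other way, every login will fail just like this. A sketch of regenerating the entry and sanity-checking the shell requirement, with paths taken from the config above:

        # htpasswd -d forces old-style crypt() hashes, which pam_pwdfile can read
        htpasswd -c -d /etc/vsftpd/passwd sitebuil
        # pam_shells requires the account's shell to be listed in /etc/shells
        grep '/bin/sh' /etc/shells

    Note also that the PAM stack above mixes pam_pwdfile with the common-auth/system-auth includes, so more than one password source may be consulted; that is an observation, not a confirmed diagnosis.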

    Read the article

  • How to combine try_files and sendfile on Nginx?

    - by hcalves
    I need Nginx to serve a file relative to the document root if it exists, and fall back to an upstream server if it doesn't. This can be accomplished with something like:

        server {
            listen 80;
            server_name localhost;
            location / {
                root /var/www/nginx/;
                try_files $uri @my_upstream;
            }
            location @my_upstream {
                internal;
                proxy_pass http://127.0.0.1:8000;
            }
        }

    Fair enough. The problem is, my upstream is not serving the contents of the URI directly; instead it returns X-Accel-Redirect with a location relative to the document root (it generates this file on the fly):

        % curl -I http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg
        HTTP/1.0 200 OK
        Date: Mon, 26 Nov 2012 20:58:25 GMT
        Server: WSGIServer/0.1 Python/2.7.2
        X-Accel-Redirect: animals/kitten.jpg__100x100__crop.jpg
        Content-Type: text/html; charset=utf-8

    Apparently, this should work. The problem, though, is that Nginx tries to serve this file from some internal default document root instead of using the one specified in the location block:

        2012/11/26 18:44:55 [error] 824#0: *54 open() "/usr/local/Cellar/nginx/1.2.4/htmlanimals/kitten.jpg__100x100__crop.jpg" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /animals/kitten.jpg__100x100__crop.jpg HTTP/1.1", upstream: "http://127.0.0.1:8000/animals/kitten.jpg__100x100__crop.jpg", host: "127.0.0.1:80"

    How do I force Nginx to serve the file relative to the right document root? According to the XSendfile documentation the returned path should be relative, so my upstream is doing the right thing.
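    For what it's worth, a sketch of how this is commonly wired up: the root is declared at server level so it also covers the internal redirect, and the upstream returns the path with a leading slash. Both points are assumptions about what the setup intends rather than a confirmed fix:

        server {
            listen 80;
            server_name localhost;
            root /var/www/nginx/;   # server-level root is inherited by every location,
                                    # including the one matched after X-Accel-Redirect

            location / {
                try_files $uri @my_upstream;
            }

            location @my_upstream {
                proxy_pass http://127.0.0.1:8000;
            }
        }

        # and the upstream would send:
        #   X-Accel-Redirect: /animals/kitten.jpg__100x100__crop.jpg

    The error path "htmlanimals/..." hints at both halves of that: no leading slash on the returned URI, and the compiled-in default root being used for the redirect.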

    Read the article

  • Is there a way to "burn" audio to an ISO? (as an audio CD)

    - by Sootah
    I have an audiobook that I've downloaded via their download manager, and it's loaded into the cutesy little audio program that they force you to use. I can play the book just fine using their proprietary software, and while it's annoying on my PC, it's utterly UNBEARABLE when I try to listen to it on my BlackBerry. The program is INSANELY slow; it literally takes around 30 seconds to switch between tracks, so if I've forgotten where I am in the book it takes me around 15 minutes to finally get back to where I was. I've looked everywhere for how to transcode the book to .MP3, but with their current format it's extremely convoluted (and I have no desire to dick around with installing some older version of the codec, getting a different transcoding app, and then wrestling with getting it to actually work). Since I'm able to burn a copy of the book to an audio CD, I figure the best way to go about this is to make the CDs and then rip them to .MP3. To avoid wasting two hours, not to mention 14 CD-Rs, I was wondering if there's a way to "burn" to an .ISO instead of an actual CD-R. I currently have SlySoft's Virtual CloneDrive installed, so I can mount .ISOs easily enough, but now I want to actually create an ISO via the CD-burning process. Just in case I've not explained myself very well, here is an overview of what I intend to do:

    1. "Burn" a set of audio-CD .ISOs from the audiobook (hopefully I can do this using Windows Media Player; otherwise I'll be forced to use the audiobook app).
    2. Mount an .ISO in Virtual CloneDrive.
    3. Rip the audio tracks on the mounted .ISO to .MP3s.
    4. Repeat steps 2-3 until the entire book is in .MP3 format.
    5. Copy the .MP3s to my BlackBerry so that I'm not driven insane every time I want to listen to the book in the car, and so I can use Winamp when listening on my computer.

    EDIT: I suppose a rather concise way to put it is that I need something that will emulate a CD-R drive, so that you can select it as the output drive in whatever app you're burning the audio CD from. (I'd suppose that when you "insert a blank CD-R" the app would then ask you what file to save to.)

    Read the article

  • Attempts at NIC teaming on Server 2008 R2 with PRO/1000 MT

    - by Klaus
    I have a Dell PowerEdge 1850 server and a gigabit switch that supports NIC teaming (and was configured to do so). The server has a total of four Intel PRO/1000 MT ports, which also support teaming. But for some reason Intel does not actually have a version of the drivers/ProSet that will work for these cards on 2008 R2; you have to use the built-in drivers that come with 2008 R2, which do not support the additional features. According to their website, they have no plans to change this. Strangely enough, I experimented with various drivers in an attempt to force it to work. At one point the teaming was working, but there were side effects (such as the DNS server refusing to start). So now I am back to running just one of the cards, (very) frustrated about the whole situation. I have looked all over to see if there is some way around this, but have not had any success. I know I can probably just get a new network adapter for it, but with the good deal I got, that would cost more than the server! :) While staying with 2008 R2, does anyone know of any possible alternatives? Thanks!

    Read the article

  • Time Machine doesn't back up some folders/files (that it should)

    - by Eric
    MacBook Pro 17" (Snow Leopard) -- WD 2 TB external drive
    MacBook Pro 13" (Snow Leopard) -- Seagate 1 TB external drive

    I find that Time Machine sometimes doesn't back up new folders (and the files in them). This occurs both when I choose "Back Up Now" from the Time Machine icon in the menu bar and in TM's scheduled backups. These are not excluded folders (nor are they in the TM do-not-back-up list); they're perfectly normal folders (at various locations) inside my home folder. The only way to force them to be backed up is to restart the computer (unmounting and remounting the TM external disk does not help). There seems to be a correlation with new folders (i.e., it's more likely that an entire new folder is not backed up), but this may just be observer bias (because those are the folders I go check to see whether they've been backed up). It's not computer-dependent (it happens on two different computers). It's not external-disk-dependent (it happens on two different external disks). It's not time-dependent (not restarting for several days does not fix the problem). What does a restart change that these other events don't? I'm considering deleting the /.fseventsd folder (without restarting the computer) to see if that helps. I haven't tried logging out and logging back in (without restarting the computer).
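    Two quick checks that might be worth running before touching /.fseventsd, since per-item "sticky" exclusions do not appear in the Time Machine preference pane; the folder path below is only an example:

        # List everything excluded from Time Machine via the per-item metadata attribute
        sudo mdfind "com_apple_backup_excludeItem = 'com.apple.backupd'"

        # Inspect one of the folders that keeps being skipped (example path)
        xattr -l ~/Documents/NewProject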

    Read the article

  • Hostname error on my Slicehost Ubuntu server

    - by allesklar
    Like many folks who upgraded to Rails 2.2, I got an exception raised when sending an email; this version of Rails and later require TLS for sending email. The message in the production log file says:

        hostname was not match with the server certificate

    I did a whole lot of research and work on this and did everything I could. I changed my slice's hostname to ohlalaweb.com. If I run the command 'hostname' at the command line I get: ohlalaweb.com. Postfix seems to work fine; I can send emails from the command line to my Gmail, Yahoo, and Google Apps Gmail accounts with no problems. Here is the result of cat /etc/postfix/main.cf:

        # See /usr/share/postfix/main.cf.dist for a commented, more complete version
        # Debian specific:  Specifying a file name will cause the first
        # line of that file to be used as the name.  The Debian default
        # is /etc/mailname.
        myorigin = /etc/mailname
        smmtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        biff = no
        # appending .domain is the MUA's job.
        append_dot_mydomain = no
        # Uncomment the next line to generate "delayed mail" warnings
        #delay_warning_time = 4h
        readme_directory = no
        # TLS parameters
        smtpd_tls_cert_file=/etc/ssl/certs/ohlalaweb.pem
        smtpd_tls_key_file=/etc/ssl/certs/ohlalaweb.pem
        smtpd_use_tls=yes
        # SA created next line to force postfix to use self create certificate
        smtpd_tls_auth_only=yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        # See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
        # information on enabling SSL in the smtp client.
        myhostname = ohlalaweb.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        mydestination = localhost.localdomain, localhost
        relayhost =
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all

    I have regenerated the SSL keys with the ohlalaweb.com hostname. Any ideas or suggestions?
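    Since the error is specifically a hostname/certificate mismatch, one thing worth verifying is that the certificate's CN matches whatever hostname ActionMailer's smtp_settings actually connect to (ohlalaweb.com versus localhost, for instance). A sketch of regenerating the self-signed pair with an explicit CN and checking the result; paths are the ones from main.cf above:

        # Generate key and certificate, then combine them into the single .pem Postfix points at
        openssl req -new -x509 -days 3650 -nodes \
            -subj "/CN=ohlalaweb.com" \
            -keyout /tmp/ohlalaweb.key -out /tmp/ohlalaweb.crt
        cat /tmp/ohlalaweb.key /tmp/ohlalaweb.crt > /etc/ssl/certs/ohlalaweb.pem
        postfix reload

        # Sanity checks: what CN does the served certificate actually carry?
        openssl x509 -in /etc/ssl/certs/ohlalaweb.pem -noout -subject
        openssl s_client -starttls smtp -connect localhost:25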

    Read the article

  • EC2 Configuration

    - by user123683
    I am trying to create a server structure for my EC2 account. The design I have chosen consists of two instances running in different availability zones, an elastic load balancer, an auto-scaling group with CloudWatch monitoring configured, and a security group defining rules for access to the instances. This setup is to support an online web application written in PHP.

    I am trying to decide which is the better policy: store the MySQL DB on a separate instance, or store the MySQL DB on an attached EBS volume. (From what I know, auto-scaling will not replicate the attached EBS volume but will generate new instances from a chosen AMI; is this view correct?)

    Regarding the AMI, I plan to use a basic Amazon Linux 64-bit AMI and install Bastille (maybe OSSEC), but I am also looking to use an encrypted file system. Are there any issues with an encrypted file system and communication between the DB and the web app that I need to be aware of? Are there any communication issues with using the encrypted filesystem on the instance housing the web app?

    I was going to launch a second instance, or attach a second volume, in the second availability zone to act as a standby for the database. I'm just looking for some suggestions about how to get the two DBs to talk; will this be a big task?

    Regarding updates for security, is it best to create a recent snapshot and just relaunch, letting Amazon install updates on launch, or is the yum update mechanism a suitable alternative? Is it better practice to relaunch instead of installing updates that force a restart?

    I plan to create two AMI snapshots, one for the app server and one for the DB, each with the same security measures in place. Is this reasonable? I just figure it is a better policy than including unnecessary applications in an AMI that I intend to use.

    My plan for backup is to create periodic snapshots of the webapp and DB instances. (If I use an additional EBS volume instead of separate instances, my understanding is that the EBS volume will persist in S3 storage in the event of an unexpected termination, and I can create snapshots of the volume for backup purposes.)

    Thanks in advance for suggestions and advice. I am new to EC2 and I may have described unnecessary overkill, but I want to try to implement what can be considered a best-practice solution, so all advice is appreciated.
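    On the backup point, a minimal sketch of a nightly EBS snapshot from cron; the volume ID is a placeholder and the EC2 API tools are assumed to be installed and configured. For a consistent MySQL snapshot you would also want to flush/lock the database or freeze the filesystem first:

        # crontab entry: snapshot the DB volume every night at 03:00
        0 3 * * * /usr/bin/ec2-create-snapshot vol-1a2b3c4d -d "nightly DB volume snapshot"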

    Read the article

  • Ati X1600 driver problem on Mac

    - by Mulot
    Hi all, I currently own a 2006 MacBook Pro 1,1, and for some months I have had recurring display bugs and artifacts. A quick search shows that a lot of other Mac users (iMac or MacBook Pro) have the same problem, due to an issue with the X1600 video card. Apparently it is caused by overheating; in my case, even without the machine warming up much, I get very bad display bugs such as colorful pixel lines or glitches, plus freezes and crashes, and this happens on Tiger, Leopard and Snow Leopard. I found an interesting article here discussing the problem and trying to gather people so that Apple takes the serious GPU problem into consideration. In one of the comments, a user said he removed all bundles with "radeon" in the name and then had no more problems under Leopard; it seems to work fine on Snow Leopard as well. I did the same thing: I removed the driver bundles, restarted, and had no more problems, but also no more 3D acceleration, which is not an acceptable solution. For those interested, here is the list of files to delete to stop having this problem:

        /System/Library/Extensions/ATIRadeonX1000.kext
        /System/Library/Extensions/ATIRadeonX1000GA.plugin
        /System/Library/Extensions/ATIRadeonX1000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX1000VADriver.bundle
        /System/Library/Extensions/ATIRadeonX2000.kext
        /System/Library/Extensions/ATIRadeonX2000GA.plugin
        /System/Library/Extensions/ATIRadeonX2000GLDriver.bundle
        /System/Library/Extensions/ATIRadeonX2000VADriver.bundle

    I would like to know if there is a way to fix this using other drivers, if that's possible, or by organizing a group to push Apple into setting up a replacement program. Edit: how to locate these files on your hard drive if you are not on Snow Leopard:

        sudo find / -iname "*radeon*"
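    If anyone tries the driver-removal workaround, a slightly safer sketch is to move the bundles aside rather than delete them, so 3D acceleration can be restored later; the backup folder name is arbitrary:

        # Move (rather than delete) the Radeon X1000/X2000 bundles so they can be put back
        sudo mkdir -p /Users/Shared/ati-backup
        sudo mv /System/Library/Extensions/ATIRadeonX1000* /Users/Shared/ati-backup/
        sudo mv /System/Library/Extensions/ATIRadeonX2000* /Users/Shared/ati-backup/
        # Touch the Extensions folder so the kext cache is rebuilt on the next boot
        sudo touch /System/Library/Extensions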

    Read the article

  • Dual VGA monitors with Quadro FX 580?

    - by dentrasi
    I have a machine with a Quadro FX 580 card (DVI and two DisplayPort outputs). Attached to it are two 19" Acer screens, which are both (annoyingly) VGA. The first one works perfectly with a DVI-VGA adaptor. The second one doesn't work. It has a VGA cable, which goes into a VGA-DVI converter, which then goes into a DVI-DisplayPort converter. Initially I was getting 'Cable Unplugged' on the screen, and it couldn't be seen by Windows or the nVidia control panel. After swapping the VGA-DVI adaptor (which works perfectly on another machine), Windows can now see the monitor. The nVidia panel sees the model and native resolution, but I get a constant 'No Signal' error. Switching to the other DisplayPort makes no difference. I suspect that the card sees a DVI connection plugged into it (the nVidia control panel shows the monitor as having a DVI connection) and is only sending out a digital signal because of this. Does anyone know of a solution (other than trying to get a DisplayPort-VGA adaptor), or of a way to force the card to see it as VGA? Thanks, ~Dentrasi

    Read the article

  • How do I install the main repositories for RHEL6

    - by eisaacson
    We've set up RHEL6 on a new server. As far as we can tell, our subscription is all set up properly. However, when I run yum repolist, it doesn't show any repositories. /etc/yum.repos.d/redhat.repo is empty. I tried pasting in the content from another RHEL6 server's redhat.repo, but as soon as I run yum it wipes it out again. I just need to get the basic Red Hat repositories set up so I can install packages.

    EDIT: Using the GUI, I went to System > Administration > Red Hat Subscription Manager. Under the 'Products' tab, it did not show any products.

    EDIT: When I run yum update, here's what I get:

        # yum update
        Loaded plugins: product-id, refresh-packagekit, security, subscription-manager
        This system is receiving updates from Red Hat Subscription Management.
        Setting up Update Process
        No Packages marked for Update

    When I log in to the Red Hat customer portal, it shows the subscription as active.

    EDIT: To make sure I wasn't having a subscription issue, I re-registered and re-subscribed. I get all the same results.

        # subscription-manager register --force
        # subscription-manager subscribe --pool=*redacted*

    EDIT: contents of /etc/yum.conf:

        [main]
        cachedir=/var/cache/yum/$basearch/$releasever
        keepcache=0
        debuglevel=2
        logfile=/var/log/yum.log
        exactarch=1
        obsoletes=1
        gpgcheck=1
        plugins=1
        installonly_limit=3

    contents of /etc/yum/pluginconf.d/rhnplugin.conf:

        [main]
        enabled = 0
        gpgcheck = 1
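    A couple of commands that might be worth running, since redhat.repo is generated from the entitlement certificates rather than edited by hand; nothing here is guaranteed to be the fix:

        # Re-read the entitlement certificates and regenerate /etc/yum.repos.d/redhat.repo
        sudo subscription-manager refresh
        # Confirm something is actually attached, and see which repos it provides
        sudo subscription-manager list --consumed
        sudo yum repolist all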

    Read the article

  • Asus n61Ja notebook bios update

    - by zKs
    I wanted to update the BIOS with an official BIOS update, from version 207 to 211. I didn't use WinFlash; I used Easy Flash in the BIOS. Everything seemed to be going okay: it deleted the old files, wrote the new ones, and verified the new ones. Then it said "shutdown in/after 2 seconds" and it shut down. After that, nothing happened anymore. The power button is completely unresponsive. The battery light was still on, and I'm not sure if I should have just waited... I didn't, though; I thought I had to remove the battery and cut the power completely to be able to start it up again. So I'm wondering: what are my options here? My warranty has expired and I don't really have the money to send it in and pay hundreds of bucks for repairs. Is there anything I can try? A CMOS battery reset? Anything??? Please help me out! I would be very grateful :) PS: What was sort of odd, by the way, was that Easy Flash said something about the BIOS being unsigned and asked whether I wanted to flash it anyway, or wanted to 'force' it or something. It was with 100% certainty the correct update from the Asus.com support site, so I didn't take that 'warning' seriously.

    Read the article

  • Yum installing wrong MySQL version on CentOS 5 (Amazon)

    - by Marius Stuparu
    I'm having trouble with a CentOS server running on AWS. This is CentOS 5.6 i386 from RightImage, but the problem was the same on all RightScale AMIs. When issuing the following command:

        yum install mysql mysql-server mysql-devel

    the only packages proposed by yum are MySQL-devel-community and MySQL-server-community. That wouldn't be a problem, except this package is old/incomplete: it does not create a "mysqld" service, only /etc/init.d/mysql (notice the missing d). That wouldn't be a problem either; I can start the service with /etc/init.d/mysql start, and it starts OK, but there is no "mysql" (or other mysql*) command available. If I try to force a different version (yum install mysql50-server ...) I get this yum error:

        mysql-5.0.77-4.el5_6.6.i386 from updates has depsolving problems
        --> mysql conflicts with MySQL-server-community

    (even when I don't have MySQL-server-community installed). I have tried this before and after yum update, on a fresh image. How can I install a working version of MySQL? I'm stuck on CentOS 5 because I want to install Kloxo (which does not yet support CentOS 6). I'm not interested in Webmin, and I can't afford cPanel. Thanks!
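    A sketch of one way to work around the conflict: find which enabled repository is supplying the MySQL-*-community packages and disable it for the install. The repo id below is a placeholder, not the real name:

        # Which repo is offering the community packages?
        yum repolist enabled
        yum info MySQL-server-community | grep -i repo
        # Install the stock CentOS packages with that repo switched off for this transaction
        # (replace 'community-repo' with the repo id reported above)
        yum --disablerepo=community-repo install mysql mysql-server mysql-devel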

    Read the article

  • QuickTime Player sounds much better than iTunes

    - by Gene Goykhman
    I am playing a 320 kbps MP3 in iTunes and the sound is substantially worse than the exact same file played back in QuickTime Player (Mac OS X 10.8.5). I have maxed out the system volume and the iTunes playback volume. I have disabled all the audio-processing features in iTunes (equalization, Sound Enhancer, etc.). The audio coming from iTunes still sounds resampled and/or processed, whereas QuickTime Player appears to be playing it "as is". Even when I Get Info on the MP3 file in Finder and play it back directly from the Get Info window it sounds good. It's just iTunes that seems to be mangling the song. I can notice a difference on virtually all my music, so it's not just one particular MP3. I suspect the issue is that iTunes is doing some kind of audio processing, but I can't find a way to turn it off. This is the newest iTunes (11.1), but the problem has probably been going on for a while; I just switched to decent earbuds and started noticing the difference. What's the best way to force iTunes to play back the file as-is, or as close as possible to how QuickTime Player/Finder would play it?

    Read the article

  • Samba Share - MS Excel when saved (can't access the file, there are several possible reasons)

    - by brain90
    Dear fellow ServerFaulters, I have a weird problem with my Samba share. I have one share definition for three clients (A, B, C). This share contains some Excel files with a lot of formulas, linked to each other. Client A accesses the files with LibreOffice (Ubuntu); client B accesses them with Windows XP and MS Office 2003. Reading and writing work successfully for both of them. The problem occurs when client C accesses the same file with MS Excel 2003 (Windows XP). This message box appears when he saves the file:

        Microsoft Office Excel cannot access the file '\\192.168.1.23\myshare\'.
        There are several possible reasons:
        - The file name or path does not exist.
        - The file is being used by another program.
        - The workbook you are trying to save has the same name as a currently open workbook.

    I tried http://support.microsoft.com/kb/291204 but it didn't work. Below is my share definition:

        [brainshare]
        comment = brainshare
        path = /opt/brainshare/
        valid users = @brainshare
        force group = brainshare
        read only = No
        create mask = 0775
        veto files = /*.scr/*.eml/thumbs.com/

    Help me please... Thanks in advance! Server: Ubuntu 10.10, Samba version 3.5.4
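    A few diagnostics that may help narrow it down before changing the share; the log settings are additions you would place under [global], shown here as comments:

        # While client C attempts the save, look for lock conflicts on the workbook
        smbstatus --locks
        # Validate the share definition exactly as smbd parses it
        testparm -s
        # Optionally raise logging to capture the failing operation from that client:
        #   log level = 2
        #   log file = /var/log/samba/log.%m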

    Read the article

  • Change the default route without affecting existing TCP connections

    - by Patrick Horn
    Let's say I have two public network addresses on my server: one NAT through an ISP (192.168.99.0/24), and a VPN through a different ISP (192.168.1.0/24), already configured with a per-host route to the VPN server through my ISP. Here is my initial routing table; I am currently routing through my ISP on subnet 192.168.99.0/24:

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        0.0.0.0         192.168.99.1    0.0.0.0         UG    0      0        0 eth1
        55.66.77.88     192.168.99.1    255.255.255.255 UGH   0      0        0 eth1
        192.168.99.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 tap0

    Now I want new TCP connections to switch to 192.168.1.0/24, so I type the following:

        $ route add -net 0.0.0.0 gw 192.168.1.1 dev tap0

    When I do this, it causes some long-standing TCP connections to hang. Is there a way to safely change the default interface for new connections, while allowing existing TCP connections to use the old route (i.e. do I need to enable some sort of stateful routing table)? I am okay with a solution that only works for established TCP connections, and I don't care how hacky it is; for example, if there is a way to add temporary iptables rules for existing connections to force them over the old route. But there has to be some way to do this.

    EDIT: Just a note about a simple "route add -host ..." for existing connections: that would work if I were fine with leaving a subset of IPs on the old interface. However, in my application it doesn't actually solve my problem, because I want to allow new connections to come in on the new interface even if they have the same source IP. I'm now looking at using the "ip route" command to set source-based routing rules.
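    A sketch of the policy-routing approach, assuming conntrack-tools is installed; the mark value (1) and table number (100) are arbitrary choices. The idea is to tag every connection that already exists, pin tagged traffic to the old gateway, and then switch the main default route so only new connections use the new path:

        # 1. Tag every connection currently in the conntrack table
        conntrack -U --mark 1
        # 2. Copy the connection mark onto outgoing packets of those connections
        iptables -t mangle -A OUTPUT -j CONNMARK --restore-mark
        # 3. Send marked packets through the old gateway via a dedicated routing table
        ip route add default via 192.168.99.1 dev eth1 table 100
        ip rule add fwmark 1 lookup 100
        # 4. Switch the main default route; only unmarked (new) connections are affected
        ip route replace default via 192.168.1.1 dev tap0
        ip route flush cache
        # Depending on rp_filter settings you may also need to relax it on eth1:
        #   sysctl -w net.ipv4.conf.eth1.rp_filter=2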

    Read the article

  • Macbook Pro Triple Boot OS X Lion, Windows 7 and Windows 8

    - by Lloyd Sparkes
    MacBook Pro (Summer 2010, base model). I currently have OS X Lion and Windows 7 running side by side on my MacBook Pro. However, I need to get Windows 8 running as well (a virtual machine is not good enough; I need the performance). I have created a suitably sized partition (80 GB) that is recognized in Boot Camp. However, every time I try to boot from the USB stick (which worked to install Windows 8 on my PC) using the latest version of rEFIt, it just boots Windows 7 and not the Windows 8 installer. I cannot start the installation from within Windows 7, as it would just install over Windows 7. I'm guessing the Boot Camp emulation is doing something weird to stop the "Press any key to install Windows..." message from appearing (which should happen if the installer detects that Windows is already installed, e.g. if you left your install disk in). Is there a way to get around this / force the installer to start? (Note that I cannot start the Windows 7 installer either, if I wanted to install a second copy of Windows 7 to upgrade to Windows 8.)

    Read the article

  • My EliteBook is not auto picking 1080p for external monitor, poor display on forcing

    - by Griever
    I'm connecting my Samsung LED S22A300B to my HP EliteBook 6930p through VGA out. The laptop has an Intel 4500MHD video card. I have the latest drivers installed for both the card and the monitor, yet only 800x600 and 1024x768 are offered. A lot of other people get this problem when they use a docking station, as discussed here, but I am not using a docking station. The monitor works great with my desktop, though. As advised on the aforementioned page, one of the things I tried was to force the resolution using Intel's "custom resolution" feature. I installed PowerStrip on my desktop, copied the advanced timing values (front/back porch, sync width, etc.) from there, and then used the same values to define a custom resolution in my laptop's Intel graphics utility. As a result, I got the 1080p resolution, but the display is poor: text has a weird colored shadow, and sometimes images do too. What should I do?

    Read the article

  • Keep-Alive header not sent from Tomcat 5.5 http connector?

    - by Codek
    We're currently using a hardware load balancer, which then goes to Apache and on to Tomcat 5.5 via the AJP connector. We've decided to dump Apache for various reasons; in our current system it doesn't provide any advantage. However, when I look at the headers sent after doing this, the "Keep-Alive: timeout=15 max=96" header does not get sent when you use the Tomcat HTTP connector. Interestingly, I can find no documentation on "keepAliveTimeout" for Tomcat 5.5, but I can for Tomcat 6; then again, neither can I find evidence that Tomcat 5.5 doesn't support this setting. Here's my connector:

        <Connector port="8090" maxHttpHeaderSize="8192"
                   maxThreads="400" minSpareThreads="150" maxSpareThreads="300"
                   enableLookups="false" connectionTimeout="2"
                   maxKeepAliveRequests="400" disableUploadTimeout="true" />

    So: is there any way I can specify the keep-alive timeout when using the HTTP connector with Tomcat 5.5, and force this header entry to be sent? Just to be clear, the exact header entry I see back from the server with Apache is:

        Keep-Alive: timeout=2, max=100

    But nothing from Tomcat/Coyote. I've looked at this some more, and I don't think the Keep-Alive header entry really matters. The problem seems to be that keep-alives are simply not supported by the Tomcat 5.5 HTTP connector? They do seem to work in Tomcat 6 (+ Java 6). Thanks, Dan
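    Two hedged observations: first, connectionTimeout is specified in milliseconds, so the current value of 2 gives the connector only 2 ms between requests, which by itself would defeat keep-alive; second, Tomcat generally does not emit a Keep-Alive header at all, since with HTTP/1.1 persistence is signalled simply by the absence of "Connection: close". A sketch of the connector with keepAliveTimeout added; that attribute is documented for Tomcat 6, and whether the 5.5 connector honours it is exactly the open question here:

        <!-- keepAliveTimeout is a Tomcat 6 attribute; treat its presence here as an experiment -->
        <Connector port="8090" maxHttpHeaderSize="8192"
                   maxThreads="400" minSpareThreads="150" maxSpareThreads="300"
                   enableLookups="false"
                   connectionTimeout="20000"
                   keepAliveTimeout="15000"
                   maxKeepAliveRequests="400"
                   disableUploadTimeout="true" />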

    Read the article

  • Toshiba laptop cd drive read causes OS to totally freeze

    - by Fujishiro
    Okay, I'll try to write an understandable summary; forgive me if I fail at that. So: there is a Toshiba Satellite notebook with Windows 7 x86 Professional (OEM) installed on it, and everything is fine (okay... somewhat). The problem: if you put an audio disc, or any kind of disc, into the drive, something starts to eat the PC. Back when the owner first told me about this, he had put an audio disc into the laptop, and Winamp caused the I/O load, 100%. I tried taskkill, taskkill /T, PowerShell, EVERYTHING. You just can NOT kill Winamp or whatever becomes the blocker at that time. Even if you kill almost everything, the laptop won't do a clean shutdown. I also tried the force switch on 'shutdown' from cmd, but no use. (So: at these times you can use the laptop, but the blocker/explorer/disc becomes gray as a non-responding app. You can try to kill them, but that won't work, nor can you shut down the machine.) (I also tried killing by PID; no use. To find the highest I/O I used "Select columns" in Task Manager and enabled the I/O columns.) My first hunch was a problematic disc: autoplay kicks in and it tries to read and read (which still shouldn't kill the PC). I disabled autoplay and removed Winamp, tried other software, etc., and everything was OK. A few days later the owner put a disc into the machine and it started to reproduce the same symptoms, but with a totally different disc. What else to know: a virus is not an option; the machine is protected by BitDefender (valid license) and Spybot. Thanks if you have ANY idea about this strange problem. PS: For now, the owner uses Daemon Tools + BlindWrite as an alternative for those apps which wouldn't start without the disc.

    Read the article

  • Split Tunnel VPN using incorrect Tunnel

    - by Brian Schmeltz
    Our company has a handful of field offices that have recently been setup with a regular internet connection after we removed the T1 and router that connected them directly to our network. Now, when the users are in the office, they log in to the VPN to be able to connect to the network. For the sake of them being able to print and scan from the local multi-function we have setup a split tunnel VPN. We currently have about 15-20 users using this setup around the country without any problems. Recently one of our users started having problems accessing internal programs/sites when connecting from both home and the office. There are three other users in the same office and they do not have this problem. I assumed that it was something with the computer and went ahead and replaced it with another of the same model. The computer worked fine in our home office; however, when the user received it, she had the exact same problem both at home and in the field office. Thinking it may be a NIC driver issue I sent her another computer, this time a different model, same problem occurred. If I update the host file to point to the correct paths, things will work, and if I connect via a normal VPN connection everything works, but the user cannot scan or print - which is a problem. Have tried to find ways to create another tunnel on a normal VPN and have tried to find ways to force the correct tunnel on the split tunnel VPN. It appears that there is something related to the ISP because if I connect to Comcast or Verizon it is fine but once she connects to Insite then she has problems. I have been unable to get any support from Insite as they don't feel the issue is with them. We use a Nortel VPN client. Any thoughts or ideas would be appreciated.

    Read the article

  • How to change controller numbering/enumeration in Solaris 10?

    - by Jim
    After moving a Solaris 10 server to a new machine, the rpool disk is now c1t0d0. We have some third-party applications hard-coded for c0t0d0. How can I change the controller enumeration on this machine? There is no longer a c0. I've tried rebuilding /etc/path_to_inst, but the instance numbers don't seem to match up with the controller numbers; it's also not clear whether i86pc platforms use this file. I've tried devfsadm -C to clear the dangling links, but I'm not sure how to make devfsadm start numbering from 0 again (or force certain devices in the tree onto a specific controller number). Next I am going to try to create the symlinks manually in /dev/dsk and /dev/rdsk to point to the correct /devices entries. I feel like I am going way off path here. Any suggestions? Thanks.

    Update: This is on virtual ESXi hardware with an additional pass-through HBA. There is no controller 0 on the machine, that is for sure. devfsadm -C cleans up all the c0 device symlinks but keeps the already-linked controllers at their current IDs.
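    The manual-symlink route mentioned above might look roughly like this; the /devices target is illustrative (copy whatever the real c1 links point at), and this only papers over the naming for the application, it does not change the kernel's instance numbering:

        # See where the existing links point; note the ../../devices/... target
        ls -l /dev/dsk/c1t0d0s0 /dev/rdsk/c1t0d0s0

        # Recreate the same targets under the c0 name the application expects
        # (the device path below is an example; substitute the one printed above)
        ln -s ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:a     /dev/dsk/c0t0d0s0
        ln -s ../../devices/pci@0,0/pci15ad,1976@10/sd@0,0:a,raw /dev/rdsk/c0t0d0s0
        # Repeat for the remaining slices (s1..s7) the application touches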

    Read the article
