Search Results

Search found 6981 results on 280 pages for 'force flow'.

  • Easiest way to allow direct HTTPS connection in Intercept mode?

    - by Nicolo
    I know the SSL issue has been beaten to death. I'm using a DNS redirect to force my clients through my intercept proxy. As we all know, intercepting an HTTPS connection is not possible unless I provide a fake certificate. What I want to achieve is to let all HTTPS requests connect directly to the origin server, bypassing Squid:

        HTTP connection: proxied by Squid
        HTTPS connection: bypasses Squid, connects directly

    I spent the past few days googling and trying different methods, but none has worked so far. I read about SSL tunneling using the CONNECT method but couldn't find any more information on it. I tried a similar approach using RINETD to forward all traffic arriving on port 443 of my Squid box to the original IP of www.pandora.com. Unfortunately, I hadn't realized that every other HTTPS request would also be forwarded to the IP of www.pandora.com; for example, https://www.gmail.com also takes me to https://www.pandora.com. Since I'm running intercept mode, the forwarding needs to be dynamic, matching each HTTPS domain name to its proper original IP. Can this be done in Squid or iptables? Lastly, I'm directing traffic to my Squid server with a DNS zone redirect: when a client requests www.google.com, my DNS server answers with my Squid IP, and my transparent Squid proxies the request. Will this setup affect what I'm trying to achieve? Any ideas on how to do this?
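
    A hedged sketch for anyone hitting the same wall: once DNS has already pointed the client at the proxy, the original server IP is unrecoverable on the Squid box, which is why the RINETD trick can only ever hit one hard-coded destination. If the interception were instead done at the network gateway with iptables, HTTPS could simply be left alone. The interface name and proxy address below are assumptions:

        # Intercept only HTTP at the gateway; there is no rule for 443,
        # so HTTPS flows straight to the origin server.
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
                 -j DNAT --to-destination 10.0.0.2:3128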

  • Weird glitches on Intel iGPU

    - by FrederikVds
    I have a weird problem that I can't manage to describe in one word, so I'm having trouble searching for a solution. My monitors sometimes go black for a tenth of a second. Other times, they show the image shifted a few centimeters to the left or to the right. This happens on both of my monitors, but not necessarily at the same time. I would say it happens about once a minute, unless under heavy load, in which case it can happen every second or so. Interestingly, heavy CPU/memory usage can also cause this, not just heavy GPU usage. This only happens when both monitors are at 1920x1080, not when one of them, or both, are at a lower resolution. It also happens when they are in mirrored mode instead of extended desktop mode. My GPU is obviously not at full clock speed most of the time: this happens at 350 MHz as well as at 1200 MHz, so it doesn't seem to be a matter of too little performance. I'm not seeing any traditional artefacts, even when I use MSI Kombustor, only these full-screen glitches. CPU stressing software isn't reporting any issues either. I'm thinking maybe the connection between my CPU and my PCH isn't fast enough, but I can't find anyone with the same problem to confirm that. I'd rather not invest in a discrete GPU without being certain it will fix something. Does anyone know how to search for this, or even better, does anyone have a solution? My full specs are below. Thanks in advance!

    Specs:
        ASUS P8Z77-M
        Intel Core i5-3570K (with Intel HD 4000 Graphics)
        2x4 GB AMD Performance Edition memory
        Corsair Force 3 Series Rev. B 120 GB SSD
        Maxtor 200 GB HD
        Lite-On DVD-RW
        Antec 350 W PSU
        64-bit Windows 7 Professional
        2x Iiyama ProLite E2208HDS displays

  • Use external display from boot on Samsung laptop

    - by OhMrBigshot
    I have a Samsung RV511 laptop, and recently my screen broke. I connected an external screen and it works fine, but only after Windows starts. I want to be able to use the external screen right from boot, in order to set the BIOS to boot from DVD, then install a different OS and format the hard drive. Right now I can only use the screen once Windows loads.

    What I've tried:
        - Opening up the laptop and disconnecting the internal display, hoping the VGA output would become the default -- didn't work.
        - Using the Fn+key combo while in the BIOS to switch to the external display -- nothing.
        - Looking for ways to change the boot sequence without entering the BIOS, but it doesn't look like that's possible.

    Possible solutions?
        - A way to change the boot sequence without entering the BIOS?
        - Someone with the same brand/similar model to help me blindly keystroke the correct arrows/F5/F6 buttons in the BIOS to change the boot sequence?
        - A way to force the external display to work from boot, through modifying the internal connections (I have no problem taking the laptop apart if needed, please no soldering though), through the BIOS, or through a program?

    Also, if I change the boot sequence without access to a screen, would the Ubuntu 12.10 installer attempt to use the external display, or would I only be able to use it after Linux is installed and running? I'd really appreciate help; I can't afford to fix the screen for a few months, and I'd really like to get my computer back to decent performance. Thanks in advance!

  • Exceptional slowdown of robocopy copying from VM to DFS array

    - by user1588867
    I've got an old Windows 2003 VM (VMware) on a blade cluster, from which I'm moving a considerable number of files to our new DFS array. There are two main folders with about 1.7 million and half a million smaller files (letters, memos, and the like) respectively; total sizes are ~420 GB and ~100 GB. We're using the GUI version of robocopy on the server to copy the files. We initiated a file copy about a month ago to test the process and found that the larger folder took around 4 hours. Now that I'm actually switching the files over, it has been taking 18-20 hours. Nothing has changed on the server side and nothing has changed in the copy settings (no logs, 1 retry with a wait of 1 second). Our intent is to shut off the share and force the copy over again to pick up all the files that were skipped because users had them locked, but I can't take a 20-hour outage to do that. Does anyone have any theories about what could be causing such a slowdown in robocopy compared to the earlier, shorter runs?
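
    For reference, a hedged command-line equivalent of the settings described above (paths are hypothetical). On Windows 7 / Server 2008 R2 or later, robocopy also has an /MT multithreading switch, which often matters far more than raw bandwidth when the job is millions of tiny files:

        robocopy \\oldvm\docs \\dfs\docs /E /R:1 /W:1 /NP /LOG:C:\robocopy.log
        :: /MT:16 would add 16 copy threads, but only on newer robocopy builds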

  • Getting Started in SuSE as an Ubuntu User

    - by Subhamoy Sengupta
    I am not a Linux newbie, but I haven't touched SuSE in a very, very long time (last time I tried it, it was SuSE 7!). Now I finally felt like giving it a try, and many things seem strange or unnecessarily complex. I have a series of questions.

    1. How do I ensure that my packages are up to date? It sounds silly, but I've already tried the obvious methods. I disabled the default repositories that show up when you do zypper lr, and added the Tumbleweed and Packman repositories (Essentials, Multimedia, Extra). Then I did a sudo zypper ref --force and then sudo zypper dup, and it tells me many dependencies are not met. I have already added solver.allowVendorChange=true to /etc/zypp/zypp.conf, so it should not care which repository the latest versions are in, and should just upgrade to them. Even when I chose to skip the packages with unmet dependencies, and quite a bit seemed to happen in the background, I opened Firefox afterwards and the version was 7! I am guessing things did not go as expected. Of course this is not a problem with SuSE; I am just not understanding the system right. How do I do it right?

    2. When I start typing arguments of a command, for example sudo zypper install, and I type sudo zypper ins and keep hitting TAB, nothing happens! This always worked in Ubuntu and I feel very uneasy without it. Is this how SuSE is supposed to be?

    3. When I try to install something and start typing its name, even though the package exists and I am sure of it, hitting TAB does not autocomplete it. This is also quite inconvenient. Why is this happening?

    There are many things in SuSE that are really great, and I think I will stay with it and not go back to Ubuntu once I settle these very rudimentary issues. But right now they are giving me a lot of grief! Please help!
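
    A hedged sketch of the two likely fixes, assuming a repository setup like the one described (the bash-completion package name is an assumption about what the repos provide):

        # Refresh everything, then pull the full distribution upgrade;
        # newer zypper accepts the vendor-change flag directly, otherwise
        # the zypp.conf setting above has to do the job
        sudo zypper ref --force
        sudo zypper dup --allow-vendor-change

        # TAB completion of subcommands and package names is provided
        # by the completion package, not by zypper itself
        sudo zypper install bash-completion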

  • Terratec Cinergy Hybrid T USB XS is not recognized anymore on Mac OS X

    - by Gabble
    I have used the Terratec Cinergy Hybrid T USB XS for years now, alongside Elgato's EyeTV software, as a happy and completely satisfied user! A couple of weeks ago the USB stick stopped working on my MacPro1,1 (OS version 10.6.3): EyeTV does not see any device attached, and the green LED on the stick stays off. It is not a USB port fault: I have unplugged every other USB/FireWire device and tried different USB ports, to no avail (all other USB devices work as expected on any port). I have completely uninstalled the EyeTV software, including preferences and system daemons/extensions, rebooted, and reinstalled the latest EyeTV. No way. Reset the PRAM. Nope. Checked the Apple System Profiler - USB: no device attached; the MacPro does not see it at all. I need to say that: a) the device worked like a charm even with the latest OS 10.6.x (so an OS upgrade is not the cause); b) I have plugged the Terratec Cinergy Hybrid T USB XS into my MacBook5,1, where EyeTV is not and never was installed: the green LED on the stick turns on, the Growl bubble pops up, and the device is perfectly recognized by the system. Apple System Profiler there says (translated from Italian):

        Cinergy Hybrid T USB XS (2882):
        Product ID: 0x005e
        Vendor ID: 0x0ccd
        Version: 1.10
        Serial number: 061102005755
        Speed: up to 12 Mb/sec
        Manufacturer: TerraTec Electronic GmbH
        Location ID: 0x04100000
        Current available (mA): 500
        Current required (mA): 500

    At this point I am pretty sure the Terratec stick is not damaged and there is something wrong with my MacPro. I kindly ask: is there a way to force my MacPro to recognize the USB device? What can I check? Is there something that caches USB connections that can be reset? An OS reinstall would be the very last resort for me. Thanks in advance for any help you can offer!
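
    Two Terminal checks worth trying before a reinstall, as a hedged sketch (the paths are standard for 10.6; whether a leftover extension is really the culprit is an assumption):

        # List non-Apple kernel extensions; anything EyeTV- or
        # TerraTec-related that survived the uninstall is a suspect
        kextstat | grep -vi com.apple

        # Force Mac OS X to rebuild its kext caches on the next boot
        sudo touch /System/Library/Extensions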

  • Redirect from folder containing website

    - by Sam
    I have a website reached from this URL: http://www.mysite.com/cms/index.php, served from this directory: public_html/cms/index.php. In public_html I have this .htaccess:

        RewriteRule (.*) cms/$1 [L]

    which lets me get to the site like this: http://www.mysite.com/index.php. But now if someone references the 'old' address, I'd like to redirect to the rewritten address with a permanent redirect code. For example, http://www.mysite.com/cms/?q=node/1 should be redirected to http://www.mysite.com/?q=node/1. How can I make this happen?

    EDIT: The .htaccess file supplied with Drupal (the CMS) contains this. I've tried enabling it, but it doesn't seem to have any effect.

        # Modify the RewriteBase if you are using Drupal in a subdirectory or in a
        # VirtualDocumentRoot and the rewrite rules are not working properly.
        # For example if your site is at http://example.com/drupal uncomment and
        # modify the following line:
        # RewriteBase /drupal

    EDIT: Including more of my .htaccess file - seems relevant.

        # Block access to "hidden" directories whose names begin with a period.
        RewriteRule "(^|/)\." - [F]

        # Strip cms folder from url
        RewriteRule (.*) cms/$1

        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !=/favicon.ico
        RewriteRule ^ index.php [L]

        # Rules to correctly serve gzip compressed CSS and JS files.
        # Requires both mod_rewrite and mod_headers to be enabled.
        <IfModule mod_headers.c>
          # Serve gzip compressed CSS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.css $1\.css\.gz [QSA]

          # Serve gzip compressed JS files if they exist and the client accepts gzip.
          RewriteCond %{HTTP:Accept-encoding} gzip
          RewriteCond %{REQUEST_FILENAME}\.gz -s
          RewriteRule ^(.*)\.js $1\.js\.gz [QSA]

          # Serve correct content types, and prevent mod_deflate double gzip.
          RewriteRule \.css\.gz$ - [T=text/css,E=no-gzip:1]
          RewriteRule \.js\.gz$ - [T=text/javascript,E=no-gzip:1]

          <FilesMatch "(\.js\.gz|\.css\.gz)$">
            # Serve correct encoding type.
            Header append Content-Encoding gzip
            # Force proxies to cache gzipped & non-gzipped css/js files separately.
            Header append Vary Accept-Encoding
          </FilesMatch>
        </IfModule>
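
    A hedged sketch of one way to get the 301, to be placed above the strip rule: match against THE_REQUEST, which holds the raw request line and is never touched by internal rewrites, so external /cms/ URLs get redirected while the internal rewrite into cms/ keeps working without a loop:

        # Externally-visible /cms/... requests get a permanent redirect
        RewriteCond %{THE_REQUEST} ^[A-Z]+\s/cms/ [NC]
        RewriteRule ^cms/(.*)$ /$1 [R=301,L]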

  • How To Boot with "mem=1024m" Argument using GRUB - Ubuntu 10.04

    - by nicorellius
    I am still working on this question. This new one is a different question, so I thought it would be good to post it separately. Is this the proper protocol, or should I have just edited the other question? I'm running Ubuntu 10.04 with kernel 2.6.32-22-generic on a Toshiba Satellite laptop. When I enter the GRUB menu (I have Ubuntu 9.10 installed as well), I can choose which kernel to boot. I scroll down to the one I want and press "e", expecting to be able to enter mem=1024m and force the kernel to use only that much memory. But when I run cat /proc/meminfo, or look in the process manager after booting with this argument, I still see all the RAM: ~2 GB. Am I using this boot argument incorrectly? The boot configuration (before I add anything) looks like this:

        insmod ext2
        set root=(hd0,1)
        search --no-floppy --fs-uuid --set 10270f21-1c42-494b-bd3f-813c23f6d518
        linux /boot/vmlinuz-2.6.32-22-generic root=UUID=10270f21-1c42-494b-bd3f-813c23f6d518 ro quiet splash
        initrd /boot/initrd.img-2.6.32-22-generic

    What I did was add mem=1024m after the last line and press Ctrl+x (save and boot the kernel), and the system booted. I also tried adding mem=1024m at the end and at the beginning of this list, and it appeared not to change the RAM allocation.
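
    A hedged pointer for anyone trying the same edit: kernel parameters only take effect when appended to the linux line itself; text added after the initrd line, or on a line of its own, is ignored by GRUB. The edited entry would look like this, with the UUID shortened:

        linux /boot/vmlinuz-2.6.32-22-generic root=UUID=10270f21-... ro quiet splash mem=1024m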

  • Not able to connect to port different than 22 - OpenVPN

    - by t8h7gu
    I have an OpenVPN network with 5 clients:

        - A computer with Arch Linux, which hosts the OpenVPN server and is in the VPN; it also hosts a virtual machine running CentOS, likewise connected to the OpenVPN subnet.
        - A Windows 8 machine hosting a CentOS virtual machine; both of them are connected to OpenVPN.
        - A CentOS virtual machine, connected to OpenVPN, hosted by a computer running Ubuntu 14 that is not connected.

    All physical computers are on different networks. The problem: when I use nmap to scan the Windows machine and its guest VM, it says the host seems down. When I force nmap to scan a specific port, it shows a filtered state:

        nmap -Pn -p 50010 n3

        Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 19:49 CEST
        Nmap scan report for n3 (10.8.0.3)
        Host is up (0.11s latency).
        rDNS record for 10.8.0.3: node3.com
        PORT      STATE    SERVICE
        50010/tcp filtered unknown

    Telnet also cannot connect to this port:

        telnet n3 50010
        Trying 10.8.0.3...
        telnet: Unable to connect to remote host: No route to host

    But ss on the host itself shows the port in its proper state:

        ss -anp | grep 50010
        LISTEN 0 50 10.8.0.3:50010 *:* users:(("java",12310,271))

    What might be the reason for this, and how do I fix it?

    EDIT: I've found that I am able to connect via telnet to the ssh port:

        telnet n3 22
        Trying 10.8.0.3...
        Connected to n3.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_5.3

    So it seems it's not a problem with the Windows firewall, but I have no idea what it might be. Also, the nmap result for the first thousand ports:

        nmap -Pn -p 1-1000 n3

        Starting Nmap 6.46 ( http://nmap.org ) at 2014-06-07 20:08 CEST
        Nmap scan report for n3 (10.8.0.3)
        Host is up (0.49s latency).
        rDNS record for 10.8.0.3: node3.com
        Not shown: 999 filtered ports
        PORT   STATE SERVICE
        22/tcp open  ssh

        Nmap done: 1 IP address (1 host up) scanned in 77.87 seconds
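
    Given that port 22 answers while everything else shows up filtered, a hedged first suspect is the CentOS guest's own firewall: its default ruleset typically allows SSH and drops the rest, which matches these symptoms exactly. A sketch of the check and a temporary test opening, run on n3:

        # See whether the INPUT chain stops at ssh
        sudo iptables -L INPUT -n --line-numbers

        # Temporarily allow the service port to test the theory
        sudo iptables -I INPUT -p tcp --dport 50010 -j ACCEPT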

  • How to configure apache to basic authentication or allow when ntlm while proxying?

    - by trotzim
    Here is my case study:

        browser --- apache proxy --- ISA server --- internet

    The ISA server requires authentication. The issue is allowing HTTPS through the two proxies. A configuration that works with HTTP looks like this (yes, I want ProxyRequests rather than ProxyPass):

        <VirtualHost *:8080>
            ...
            SetEnv auth-proxy-chain on
            ...
            ProxyRequests On
            ProxyRemote * http://isaproxy:80
            ...
            <Proxy *>
                AuthName "ISA server auth"
                AuthType Basic
                [here a module to authenticate]
                Require valid-user
                Allow from all
            </Proxy>
            ...
        </VirtualHost>

    The user can authenticate on the Apache proxy, and the authentication chain is then sent to the ISA server, which allows the HTTP traffic. But when the browser switches to HTTPS, the ISA server "speaks" NTLM and breaks the authentication on the Apache proxy. I tried to use the SSPI module (NTLM) with something like this:

        blablabla
        <Proxy *>
            AuthName "ISA server auth"
            AuthType ntlm
            [ SSPI stuff ]
            Require valid-user
            Allow from all
        </Proxy>

    but the Apache server rejects the authentication (or the ISA server does; I don't really know which). I used Wireshark to watch the nominal flow when using the ISA server directly as the proxy: the first auth chain is Basic, then it switches to NTLM (and the challenge continues with NTLM). How should I configure Apache so that it passes the NTLM authentication through to the ISA proxy without checking it(*)? Or so that it rewrites headers to force Basic authentication?

    (*) It seems not to be as easy as it sounds...
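
    A hedged avenue worth testing: NTLM authenticates a TCP connection rather than an individual request, so a chaining proxy that pools and reuses backend connections will break the handshake. mod_proxy_http exposes an environment variable aimed at this class of problem; whether it is sufficient for an ISA chain is an assumption to verify:

        # Tie each client's initial request to a fresh backend connection,
        # which NTLM's connection-oriented handshake depends on
        SetEnv proxy-initial-not-pooled 1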

  • Apache Redirect is redirecting all HTTP instead of just one subdomain

    - by David Kaczynski
    All HTTP requests, such as http://example.com, are getting redirected to https://redmine.example.com, but I only want http://redmine.example.com to be redirected; requests for http://example.com should be served by the default site. I have the following in my 000-default configuration:

        <VirtualHost *:80>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public
            Redirect permanent / https://redmine.example.com
        </VirtualHost>

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            . . .
        </VirtualHost>

    Here is my default-ssl configuration:

        <VirtualHost *:443>
            ServerName redmine.example.com
            DocumentRoot /usr/share/redmine/public
            SSLEngine on
            SSLCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
            SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
            BrowserMatch "MSIE [2-6]" \
                nokeepalive ssl-unclean-shutdown \
                downgrade-1.0 force-response-1.0
            BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
            <Directory /usr/share/redmine/public>
                Options FollowSymLinks
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>
            LogLevel info
            ErrorLog /var/log/apache2/redmine-error.log
            CustomLog /var/log/apache2/redmine-access.log combined
        </VirtualHost>

        <VirtualHost *:443>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            . . .
        </VirtualHost>

    Is there anything here that would cause all HTTP requests to be redirected to https://redmine.example.com?
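
    A hedged reading of these symptoms: with name-based virtual hosts, any request whose Host header matches no ServerName is served by the first vhost listed for that port, and here the first *:80 vhost is the one carrying the Redirect. A sketch of the reordering (the explicit ServerName example.com is an assumption):

        # The catch-all site goes first, so it becomes the default vhost
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www
        </VirtualHost>

        # The Redmine vhost now only sees requests that actually name it
        <VirtualHost *:80>
            ServerName redmine.example.com
            Redirect permanent / https://redmine.example.com
        </VirtualHost>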

  • Dual Boot Installing Ubuntu 12.04 with Windows 7 (64) on a non UEFI system fails

    - by Randnum
    I cannot seem to install the correct boot loader for a non-UEFI system. I'm trying to dual-boot Ubuntu 12.04 and Windows 7 (64-bit), which are both technically compatible with GPT -- but for Windows, only if the firmware is UEFI. My system uses the old BIOS scheme and does not support UEFI. Therefore, whenever I finish an Ubuntu install and try to install Windows, I get a "cannot install to GPT partition type" error. Even if I use GParted to format an NTFS partition for Windows, Windows can't handle the GPT partition style without UEFI. But my Ubuntu install always sets up GPT during installation and never asks whether I want the old BIOS-style MBR instead. How do I resolve this? Both OSes install fine on their own; the problem is that whichever OS I install second doesn't recognize the other's partitions and tries to write its own scheme on top. I've tried both orders and always run into the same problem. Since there is no way to make Windows recognize GPT without upgrading my motherboard, how do I tell Ubuntu to use the old BIOS MBR at install time? Do I have to download a special Ubuntu image with a specific GRUB version, or should I manually configure my partition table somehow to force it not to use GPT? Thank you.
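
    If the disk really is carrying a GPT label from an earlier install, one hedged way to convert it back to MBR, without touching any UEFI setting, is gdisk's recovery menu from the Ubuntu live CD (/dev/sda is an assumption, and this rewrites the partition table, so back up first):

        sudo gdisk /dev/sda
        # then, at the gdisk prompts:
        #   r   - open the recovery/transformation menu
        #   g   - convert GPT into MBR
        #   w   - write the new MBR table and exit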

  • Apache mod_proxy

    - by mhouston100
    Uggh, I'm spewing that I can't figure this out, I'm so frustrated:

        <VirtualHost *:80>
            ServerName domain1.com.au
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/html
            ErrorLog ${APACHE_LOG_DIR}/error.log
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            <Proxy *>
                Order Allow,Deny
                Allow from all
            </Proxy>
            RewriteEngine on
            RewriteCond %{SERVER_PORT} !^443$
            RewriteRule ^/(.*) https://%{HTTP_HOST}/$1 [NC,R,L]
        </VirtualHost>

        <VirtualHost *:443>
            ServerName domain1.com.au
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/owncloud.pem
            SSLCertificateKeyFile /etc/apache2/ssl/owncloud.key
            DocumentRoot /var/www/html
        </VirtualHost>

        <VirtualHost *:*>
            ServerName domain2.com.au
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPass / https://192.168.1.12/
            ProxyPassReverse / https://192.168.1.12/
        </VirtualHost>

    Not sure if it's clear what I'm trying to do, but I've read and read and READ, and I still can't figure it out. Basically I have a working Apache server with a rewrite that forces HTTPS, as seen in the first two VirtualHost entries. I now have a webmail service set up on another server, under another domain name, but I only have one incoming public IP address. So I'm trying to have any incoming requests for the second domain proxied to the other server, whether they arrive on port 80 or 443. IMAP and POP3 are no problem; I can just forward those ports directly to the correct server. The result of the above configuration is that requests to domain2.com.au (port 80 or 443) are forwarded to https://domain1.com.au. Am I headed in the right direction?
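
    Two hedged observations on the config above: proxying to an https:// backend requires SSLProxyEngine on in the vhost doing the proxying, and a <VirtualHost *:*> block cannot terminate TLS for port 443 itself, so those requests fall through to the domain1 vhosts. A sketch of a dedicated 443 vhost for the webmail domain (certificate paths are placeholders):

        <VirtualHost *:443>
            ServerName domain2.com.au
            SSLEngine on
            SSLCertificateFile /etc/apache2/ssl/domain2.pem
            SSLCertificateKeyFile /etc/apache2/ssl/domain2.key

            # Allow mod_proxy to open TLS connections to the backend
            SSLProxyEngine on
            ProxyPass / https://192.168.1.12/
            ProxyPassReverse / https://192.168.1.12/
        </VirtualHost>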

  • Mass-migrating from POP3 to Exchange 2010, how do I copy mailboxes?

    - by Erik P. Skaalerud
    I'm in the process of planning our migration from an internally hosted POP3 server (Dovecot) to Exchange 2010. We're using Outlook 2003 for the moment but will soon upgrade to Outlook 2010. The big problem is that we have about 50 computers here at our HQ, plus ~30 clients in branch offices (which will get their Exchange migration sometime later). I'm the only IT personnel, and going around manually setting up Outlook and copying over PST contents is not an option I'm looking for. Some users have set Outlook to keep messages on the POP3 server for X number of days; others have not, so using a POP3 connector to transfer the mail is not viable. Here is what I've done so far:

        - Created a transform for the Office 2003 administrative installation point
        - Created a .PRF file to modify any existing e-mail account to switch over to Exchange (including the RPC-encryption hotfix described in MSKB 2006508)
        - Tested both the transform and the PRF; both work
        - Created a test OU and GPO containing the Office 2003 installation with the transform applied; also works

    My big question: how can I force Outlook to import any existing .PST into the new Exchange mailbox when the user starts Outlook for the first time after the MST/PRF have been applied? Is this possible?
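
    It may not have to happen on the client at all. As a hedged alternative: Exchange 2010 SP1 added server-side PST import through the Management Shell, so the PSTs could be gathered onto a share and imported in bulk (the share path and mailbox name below are hypothetical, and the cmdlets require SP1 plus an explicit role assignment):

        # One-time: grant the admin account the import/export role
        New-ManagementRoleAssignment -Role "Mailbox Import Export" -User admin

        # Per mailbox: import that user's PST from a UNC share
        New-MailboxImportRequest -Mailbox erik -FilePath \\fileserver\pst$\erik.pst

        # Watch progress
        Get-MailboxImportRequest | Get-MailboxImportRequestStatistics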

  • USB drive was bootable, but no longer boots

    - by i-g
    I'm trying to install a new OS onto a computer from a bootable USB stick. I previously installed Ubuntu Linux this way and it was a piece of cake -- I downloaded the ISO image, used UNetbootin to copy it to the USB drive and make it bootable, and that was that. Now, however, no matter what I try, I can't make the same USB drive bootable again! I've tried formatting it as FAT32 and NTFS. I've tried several different Linux distributions and Windows 7. I've tried UNetbootin, the Windows 7 USB Download Tool, WinToFlash, and making it bootable manually with diskpart/bootsect/bootrec (yes, I've tried bootsect /nt60 x: /force). None of it works. When I try to boot from the drive, the machine reads from it (I can see the drive's LED blinking) and then gives me the same "Insert system disk and press Enter" message. (I've disabled booting from the hard drive.) Am I missing something I need to do to make the USB drive bootable again? I think it lost some pixie dust when I formatted it with the standard Windows formatting tool (it was quicker than deleting files), but I have no idea what that was or how to get it back. The USB drive in question is a SanDisk Cruzer 8GB SDCZ6. The computer I'm working on is running Windows Vista SP1.
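
    For what it's worth, the usual "pixie dust" is a valid MBR plus an active (bootable) primary partition, and a plain Windows format can leave the stick without either. A hedged diskpart sequence that rebuilds the drive from scratch before the usual bootsect step (disk 1 is an assumption -- confirm with list disk, since clean erases the whole drive):

        diskpart
        list disk
        select disk 1
        clean
        create partition primary
        select partition 1
        active
        format fs=fat32 quick
        assign
        exit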

  • FTP account ownership on vhost directory makes Apache not run website correctly

    - by CodeShining
    I've purchased a virtual server, where I'm given a non-root, sudo-enabled user. I need a separate FTP account that is not that sudo-able account, so I created a no-login account just for that purpose. I've set up vsftpd correctly, including the "userlist" feature to specify which users may use FTP. Then I created an empty directory under my sudo-able account and gave ownership to the FTP account. To make it easier to follow: the main account (the one I manage the VPS with) is called ubuntu, the FTP user is named ftpuser, and I created the directory /home/ubuntu/mywebsite owned by ftpuser:ftpuser. Then I uploaded a WordPress site, whose default permissions are 755 and 644. The issue is that Apache has no privileges to run the website. How can I make the website run properly, and which approach is most secure?

        - Should I run that virtual host as another user (if that's possible)?
        - Should I force the FTP user into the www-data group (if that's possible) and run with permissions like 775 and 664?

    Any help is appreciated. I'd like to keep the default permissions, so updates won't break anything (because of permissions being reset).
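
    A hedged sketch of the group-based route, which keeps the default 755/644 modes untouched; the execute bit on the home directory is the step most often missed, since www-data has to traverse the whole path (usernames and path as described above):

        # Let the Apache user read files owned by ftpuser:ftpuser
        sudo usermod -a -G ftpuser www-data

        # Every directory on the way down must be traversable by Apache
        sudo chmod o+x /home/ubuntu
        sudo chmod -R g+rX /home/ubuntu/mywebsite

        # Group membership changes take effect after a restart
        sudo service apache2 restart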

  • Windows 7 explorer crashing trying to read external hard disk

    - by Mario De Schaepmeester
    I have a 1 TB Western Digital hard drive which is almost full. The last time I plugged it into my laptop, I got a Windows dialog saying "this hard drive needs to be formatted". I did not panic, because I have experienced things like this before and it's often solved by simply re-inserting the drive. Now, however, whenever I plug it in and try to browse it in Explorer by going to "Computer", the Explorer process crashes after a while. I simply close Explorer, since it takes ages trying to read the disk and nothing happens. After searching on the internet, the best thing to do appeared to be a chkdsk. I tried it via Properties in Explorer (which itself took a good 5 minutes to open): it locks up as well, and after a couple of minutes it says there's no access to the disk, so a chkdsk is not possible. I want to make clear that I always use safe removal before pulling out the USB cable. Last time, however, safe removal just would not work, and when I tried to shut down Windows, the logoff screen would not disappear (I waited at least 10 minutes), so I powered off the PC by force. This may be the cause of the problems, but the disk was still recognized immediately after that. I really don't want to format this thing, because it contains C: clones of 3 computers and a lot of other stuff that I don't want to re-copy. What would be the best course of action?

    Update: I got chkdsk working via the command line, using the /F and /R options. I'm already getting a bunch of lines saying "file record segment X is unreadable", or whatever the exact wording is in English (my OS is Dutch). It looks bad... Will chkdsk repair these errors?

  • Kickstarting an Ubuntu Server 10.04 installation (DHCP fails)

    - by William
    I'm trying to automate the network installation of Ubuntu 10.04 LTS with an Anaconda kickstart, and everything seems to run except the initial DHCP autoconfiguration. The installer attempts to configure the install via DHCP but fails on its first attempt. This drops me at a prompt where I can retry DHCP, and it always seems to work on the second attempt. My issue is that this is not really automated if I have to hit retry for DHCP. Is there something I can add to the kickstart file so that it will automatically retry, or better yet, not fail the first time? Thanks. Kickstart:

        # System language
        lang en_US
        # Language modules to install
        langsupport en_US
        # System keyboard
        keyboard us
        # System mouse
        mouse
        # System timezone
        timezone America/New_York
        # Root password
        rootpw --iscrypted $1$unrsWyF2$B0W.k2h1roBSSFmUDsW0r/
        # Initial user
        user --disabled
        # Reboot after installation
        reboot
        # Use text mode install
        text
        # Install OS instead of upgrade
        install
        # Use Web installation
        url --url=http://10.16.0.1/cobbler/ks_mirror/ubuntu-10.04-x86_64/
        # System bootloader configuration
        bootloader --location=mbr
        # Clear the Master Boot Record
        zerombr yes
        # Partition clearing information
        clearpart --all --initlabel
        # Disk partitioning information
        part swap --size 512
        part / --fstype ext3 --size 1 --grow
        # System authorization information
        auth --useshadow --enablemd5
        %include /tmp/pre_install_ubuntu_network_config
        # Always install the server kernel.
        preseed --owner d-i base-installer/kernel/override-image string linux-server
        # Install the Ubuntu Server seed.
        preseed --owner tasksel tasksel/force-tasks string server
        # Firewall configuration
        firewall --disabled
        # Do not configure the X Window System
        skipx

        %pre
        wget "http://10.16.0.1/cblr/svc/op/trig/mode/pre/system/Test-D" -O /dev/null
        # Network information
        # Start pre_install_network_config generated code
        # Match cobbler system interfaces to physical interfaces by MAC address
        # Start eth0
        # Configuring eth0 (00:1A:64:36:B1:C8)
        if ip -o link show | grep -i 00:1A:64:36:B1:C8
        then
            IFNAME=$(ip -o link show | grep -i 00:1A:64:36:B1:C8 | cut -d" " -f2 | tr -d :)
            echo "network --device=$IFNAME --bootproto=dhcp" >> /tmp/pre_install_ubuntu_network_config
        fi
        # End pre_install_network_config generated code

        %packages
        openssh-server
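
    A hedged fix worth trying: the Ubuntu installer's netcfg component has a configurable DHCP timeout, and the first attempt often expires before a slow DHCP server or a spanning-tree-blocked switch port answers. Since the kickstart already passes d-i preseeds through, a line like the following might let the first attempt succeed (the 120-second value is an arbitrary guess):

        # Give the installer's DHCP client more time before declaring failure
        preseed --owner d-i netcfg/dhcp_timeout string 120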

  • Strategy for Incremental Datasource Fetches in Excel

    - by user1352530
    I am in a scenario where a table is refreshed by a third-party app every week, and I need to keep accumulating all of its data in Excel, using an ODBC connection to the database. I am wondering:

        Approach 1: Is there a way to force Excel to append the results of every update (the update would be triggered according to a parameter that indicates the week)? I tried to define the table the connection loads into using a dynamic reference, but once it is anchored the first time, the table position is never redefined.

        Approach 2: Use an ETL to accumulate all weekly results into a staging table and then connect Excel to it in real time. But then I would need a mechanism for caching old data, as I cannot let the load time grow without bound: imagine that after 10 years, Excel would have to fetch 10 years of data at every open before showing anything. Is there a way to store already-fetched data and increment it in real time (when the workbook is opened) by selecting only the new data (with a query/filter of some kind)?

    Thanks.

    EDIT: Maybe it's better to ask it this way: what is the optimal strategy for a table that keeps growing and needs to be read in real time by Excel? I just don't want to fetch absolutely all the data after a few months...

  • How does one debug Windows network share authentication?

    - by ajs410
    I have machine0 with 32-bit Vista, logged in as a domain user, running a VMware image of 32-bit Vista, logged in as a local user, with the VM set to bridge the network. From an administrator account (called admin) within the VM, I try to access the hidden C$ share on machine0 (i.e. Start - Run - "\\machine0\C$\"). I get no prompt for credentials. Worse, machine0 has its own admin account (different password), and machine0\admin gets locked out when VM\admin tries to access the network share. I get a message several seconds later, which feels like a cached-credential failure leading to the lockout. I have checked several places for cached credentials: net use, Stored Usernames and Passwords, mapped shares. I rebooted (both machine0 and the VM) to make sure the session was clear of any cached credentials. I can force net use to apply my domain credentials when accessing machine0, and then I can see the share. I can also see shares that do not require credentials. I decided to try another machine on the network (machine1, 64-bit Vista, local user). This machine has no lockout policy, and after several seconds (feels like failed cached credentials again) it prompts me for credentials. After I enter them, it re-prompts me, saying "logon unsuccessful" (I tried my domain credentials and also machine1\admin's) -- which is bogus, because I then proceed to log on via Remote Desktop with the machine1\admin credentials. I have tried this on another machine (machine2, 64-bit Vista) running a copy of the same 32-bit VM, and I don't remember having this problem. machine0 has a fingerprint reader... could that be storing passwords and interfering? Are there any places I'm missing where there could be cached credentials? Is there a way to see what credentials are flying around when I try to connect?
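
    Two command-line checks that may help, as a hedged sketch: cmdkey lists the credential-vault entries that the Stored Usernames and Passwords dialog sometimes hides (fingerprint-reader software tends to add its own), and net use can pin an explicit account to the share so the silent guessing stops. Names are the ones from the question:

        rem Show every cached credential Windows will try silently
        cmdkey /list

        rem Connect with an explicit account; the * prompts for the password
        net use \\machine0\C$ /user:machine0\admin *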

  • Dual hard drive Windows 7 system, modified the registry to get programs to install on second drive, now IE doesn't work

    - by paul
    I have a dual-hard-drive Windows 7 system: Windows is installed on an SSD (C:), and I modified the registry to try to force programs to install on the second HDD (another drive letter). The registry edits are pretty simple, just a few values under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion that change the drive letter. For the most part the system is very fast and works great, but IE doesn't work anymore. IE10 opens a white window for a flash, then closes. I tried installing IE11, which opens a white window for a few seconds, stops responding, then crashes. I've tried all the solutions I could find: resetting the IE settings; "uninstalling" and re-installing IE (which is just turning it on and off in "Turn Windows features on or off"); copying the Program Files\Internet Explorer files onto both/either drive; changing the registry values back to C:; lots of rebooting; and safe mode. Nothing has worked. I don't see errors in the Event Viewer, but I might not know what to look for. Any ideas on how to get IE running? I don't need IE for daily browsing; I just need it for cross-browser testing on sites I build, and for the rare page that only works in IE. I don't really want to use a virtual machine, but I'd be OK with something standalone like tredosoft's, though I'm not aware of anything like that for current versions of IE.
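
    One hedged check before anything more drastic: on 64-bit Windows the install-path values exist twice, and a half-reverted edit can leave the 32-bit view still pointing at the second drive, which IE's 32-bit components would trip over. The queries below only read the values (reg is the stock command-line registry tool):

        rem 64-bit view of the install path
        reg query "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir

        rem 32-bit (WOW64) view, which IE's 32-bit components use
        reg query "HKLM\SOFTWARE\Wow6432Node\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir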

  • Apache: getting proxy, rewrite, and SSL to play nice

    - by Rich M
    Hi, I'm having loads of trouble trying to integrate proxy, rewrite, and SSL all together in Apache 2. A brief history: my application runs on port 8080, and before adding SSL I used the proxy to strip the 8080 from the URLs to and from the server. So instead of www.example.com:8080/myapp, the client app accessed everything via www.example.com/myapp. Here is the conf that accomplished this:

        ProxyRequests Off
        <Proxy */myapp>
            Order deny,allow
            Allow from all
        </Proxy>
        ProxyPass /myapp http://www.example.com:8080/myapp
        ProxyPassReverse /myapp http://www.example.com:8080/myapp

    What I'm trying to do now is force all requests to myapp to be HTTPS, and then have those SSL requests follow the same proxy rules that strip out the port number, as before. Simply changing the ports from 8080 to 8443 in the ProxyPass lines does not accomplish this. Unfortunately I'm not an expert in Apache, and my skills of trial and error are reaching the end of the line:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule myapp/* https://%{HTTP_HOST}%{REQUEST_URI}

        ProxyRequests Off
        <Proxy */myapp>
            Order deny,allow
            Allow from all
        </Proxy>
        SSLProxyEngine on
        ProxyPass /myapp https://www.example.com:8443/mloyalty
        ProxyPassReverse /myapp https://www.example.com:8433/mloyalty

    As this stands, a request to anything on the server other than /myapp loads fine over HTTP. If I make a browser HTTP request to /myapp, it redirects to https://www.example.com:8443/myapp, which is not the desired behavior. Links within the application then resolve to https://www.example.com/myapp/linkedPage, which is desirable. Browser requests (HTTP and HTTPS) to anything one level below /myapp, i.e. /myapp/mycontext, resolve to https://www.example.com/myapp/mycontext without the port. I'm not sure what other information I can give, but I think my goals are clear.
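
    A hedged sketch of one arrangement that usually behaves: do the HTTP-to-HTTPS redirect in the port 80 vhost with a plain Redirect instead of mod_rewrite, and keep the proxy rules only in the 443 vhost, so the backend port never leaks into the external URL (hostnames and paths follow the question's examples; certificate directives omitted):

        <VirtualHost *:80>
            ServerName www.example.com
            Redirect permanent /myapp https://www.example.com/myapp
        </VirtualHost>

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on

            # Proxy the decrypted requests to the application port
            SSLProxyEngine on
            ProxyRequests Off
            ProxyPass /myapp https://www.example.com:8443/myapp
            ProxyPassReverse /myapp https://www.example.com:8443/myapp
        </VirtualHost>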

  • RAID5 issue after replacing motherboard and upgrading firmware

    - by 8steve8
    OK, so I've had a 4x2TB (Samsung HD204UI, with the firmware patch) RAID5 array working normally for about a month. It was on an H57 Gigabyte motherboard using Intel RAID, with Windows 7 x64. Today I got an Intel H67 motherboard, so I upgraded the Intel RAID drivers from 9.6.0.1014 to 10.1.0.1008; I'm not sure whether I checked after a reboot, but it caused no problems. I swapped in the new DH67 board, and my array status was "failed": 2 of the 4 drives listed themselves as members, while the other two listed themselves as non-members. I tried going back to the old H57 board and downgrading the RAID drivers, but the issue remains. It's not port-dependent: the same 2 drives always come up as non-members, regardless of which port or motherboard they are plugged into. This screenshot should show that the serial numbers match, which raises the question of why the software doesn't recognize the drives as members of the array. I'd like to know if anyone has experienced anything similar, and what I should do. Can I force a drive to be recognized as a member (without wiping data)?

  • Blue screen of Death on Install

    - by Toby Allen
    I have a machine with Windows Vista installed. It has an Intel X25 SSD as the system drive. I want to reinstall with XP (I plan to format and overwrite Vista). When I boot from the Dell XP CD, it loads the initial drivers and then I get a blue screen, which is quite concerning. The installed OS works OK, but it's giving me problems, so I want to remove it. Should I just format the SSD and try again, and would that make any difference? Can I do something to avoid hitting the blue screen? It's possible I had corrupt sectors on one of the other disks; will a new XP install use the system drive or drive 0? Can I force the install to use a specific drive? The error:

        *** STOP: 0x0000007B (0xF78D2524,0x0000034,0x00000000,0x00000000)

    I never did find the answer, however:

        - I removed the SSD and tried to install on the other disk - CRASH
        - I disconnected the other disk and tried to install with only the SSD plugged in - CRASH
        - I removed 1 stick of RAM - CRASH
        - I used a Windows 7 CD - NO CRASH
