Search Results

Search found 33336 results on 1334 pages for 'factory method'.

Page 470 of 1334

  • ISA 2006 SP1 - SSL Client Certificate Authentication in Workgroup Environment

    - by JoshODBrown
    We have an IIS6 website that was previously published using an ISA 2006 SP1 standard server publishing rule. In IIS we had required that a client certificate be provided before the website could be accessed... this all worked fine and dandy.

    Now we wish to use a web publishing rule on ISA 2006 SP1 for this same website. However, it seems the client certificate doesn't get processed now, so of course the user can't access the website. I've read a few articles stating that the CA for the certificate needs to be installed in the Trusted Root Certification Authorities store on the ISA Server (I have done this), as well as installing the client certificate on the ISA Server (done as well). I have also verified that the ISA Server is able to access the CRL for our CA, no problem.

    In the listener properties for the web publishing rule, under Authentication > Client Authentication Method, there is an option for SSL Client Certificate Authentication. I select this, but it appears the only selectable Authentication Validation Method is Windows (Active Directory), and there is no Active Directory in this environment. When I configure the rule with the defaults and then try to hit my website, it prompts for my certificate; I choose it and hit OK, and then I'm given the following error:

        Error Code: 500 Internal Server Error. The server denied the specified Uniform Resource Locator (URL). Contact the server administrator. (12202)

    I check the event logs on the ISA Server, and in the Security log I see Event ID 536, Failure Audit, with the reason: "The NetLogon component is not active." I think this is pretty obvious, since there is no Active Directory available.

    Is there a way to make this web publishing rule work using client certificates in this workgroup environment? Any suggestions or links to helpful documents would be greatly appreciated!

    Read the article

  • Can't login via ssh after upgrading to Ubuntu 12.10

    - by user42899
    I have an Ubuntu 12.04 LTS instance on AWS EC2 and I upgraded it to 12.10 following the instructions at https://help.ubuntu.com/community/QuantalUpgrades. After upgrading I can no longer ssh into my VM. It isn't accepting my ssh key, and my password is also rejected. The VM is running, reachable, and SSH is started; the problem seems to be in the authentication step. SSH has been the only way for me to access that VM. What are my options?

        ubuntu@alice:~$ ssh -v -i .ssh/sos.pem [email protected]
        OpenSSH_5.9p1 Debian-5ubuntu1, OpenSSL 1.0.1 14 Mar 2012
        debug1: Reading configuration data /home/ubuntu/.ssh/config
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: /etc/ssh/ssh_config line 19: Applying options for *
        debug1: Connecting to www.hostname.com [37.37.37.37] port 22.
        debug1: Connection established.
        debug1: identity file .ssh/sos.pem type -1
        debug1: identity file .ssh/sos.pem-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: sending SSH2_MSG_KEX_ECDH_INIT
        debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
        debug1: Server host key: RSA 33:33:33:33:33:33:33:33:33:33:33:33:33:33
        debug1: Host '[www.hostname.com]:22' is known and matches the RSA host key.
        debug1: Found key in /home/ubuntu/.ssh/known_hosts:12
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: publickey
        debug1: Trying private key: .ssh/sos.pem
        debug1: read PEM private key done: type RSA
        debug1: Authentications that can continue: publickey,password
        debug1: Next authentication method: password
        [email protected]'s password:
        debug1: Authentications that can continue: publickey,password
        Permission denied, please try again.
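
    Since both key and password auth fail, the usual escape hatch on EC2 is to repair the root volume from a second instance. A rough sketch using the modern AWS CLI (all IDs, device names, and mount points below are placeholders):

        # stop the broken instance and move its root volume to a rescue instance
        aws ec2 stop-instances --instance-ids i-0aaaaaaa
        aws ec2 detach-volume --volume-id vol-0bbbbbbb
        aws ec2 attach-volume --volume-id vol-0bbbbbbb \
            --instance-id i-0ccccccc --device /dev/sdf

        # on the rescue instance: mount the volume and inspect the auth setup
        sudo mount /dev/xvdf1 /mnt
        sudo tail -50 /mnt/var/log/auth.log
        sudo cat /mnt/home/ubuntu/.ssh/authorized_keys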

    Read the article

  • Headless VirtualBox VM NAT Network

    - by dirt
    I have a remote Linux server accessible through SSH only. My goal is to host multiple virtual machines on this host server using VirtualBox. The host server has one IP address, so NAT will be used to route to the VMs: for example, port 10022 will forward to server1:22 and port 20022 will forward to server2:22.

    I have installed VirtualBox and copied a pre-configured CentOS VM to the host server. I start the VM but cannot establish a connection to the server; for example, ssh -p 10022 127.0.0.1 times out. I've tried many things:

    Method 1: copied the existing .vdi and attached it to a new VM.

    Method 2: imported an .ova VM (thought it would help with any MAC re-initialization issues?).

    NAT network type; tried natnet1 192.168/16 and 10.0/16:

        VBoxManage modifyvm "hermes.awoms.com" --natnet1 "192.168/16"

    Port forwarding, with and without specifying the VM IP in the modifyvm --natpf1 command:

        VBoxManage modifyvm "hermes" --natpf1 "guestssh,tcp,,10022,,,22"
        VBoxManage modifyvm "hermes" --natpf1 "guestssh,tcp,,10022,192.168.0.15,22"

    I can't see whether the VM is even booting (VBoxHeadless "hermes" --start & runs with no errors), and I can't tell whether the VM is getting an IP address. Is there anything else I can do to get more information from VirtualBox or from the VM starting up, when the only access I have is SSH?
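
    A few ways to get more visibility, as a sketch (assuming VirtualBox 4.x; the VM name "hermes" is from the question, and the VRDE remote display needs the extension pack installed):

        # confirm the VM is actually running
        VBoxManage showvminfo "hermes" --machinereadable | grep -i vmstate

        # expose the VM console over RDP so you can watch it boot
        VBoxManage modifyvm "hermes" --vrde on --vrdeport 5012

        # the documented --natpf1 format is name,protocol,hostip,hostport,guestip,guestport,
        # so a rule without a guest IP has two commas before the guest port:
        VBoxManage modifyvm "hermes" --natpf1 "guestssh,tcp,,10022,,22"

        # start headless with the documented flag
        VBoxHeadless --startvm "hermes" &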

    Read the article

  • I flashed my DS4700 with a 7 series firmware, now my DS4300 cannot read the disks I moved to that lo

    - by Daniel Hoeving
    In preparation for adding a number of 1TB SATA disks to our DS4700, I flashed the controller firmware from a 6 series (which only supports logical drives up to 2TB) to a 7 series (which supports logical drives larger than 2TB). Attached to this DS4700 was an EXP710 expansion drawer that we had planned to migrate out to our co-location to alleviate the storage issues we were having there. Unfortunately these two projects were planned in isolation from one another, so I was unaware at the time of the issue this would cause.

    Prior to migrating the drawer I was reading the "IBM TotalStorage DS4000 EXP700 and EXP710 Storage Expansion Enclosures Installation, User's, and Maintenance Guide" and discovered this:

        Controller firmware 6.xx or earlier has a different metadata (DACstore) data structure than controller firmware 7.xx.xx.xx. Metadata consists of the array and logical drive configuration data. These two metadata data structures are not interchangeable. When powered up and in Optimal state, the storage subsystem with controller firmware level 7.xx.xx.xx can convert the metadata from the drives configured in storage subsystems with controller firmware level 6.xx or earlier to the controller firmware level 7.xx.xx.xx metadata data structure. However, the storage subsystem with controller firmware level 6.xx or earlier cannot read the metadata from the drives configured in storage subsystems with controller firmware level 7.xx.xx.xx or later.

    I had assumed that deleting the logical drive and array information on the EXP710 prior to migrating it to the DS4300 (6.60.22 firmware) would satisfy the above; unfortunately I was wrong. So my question is: a) is it possible to restore the DACstore information to its factory settings, b) what tool(s) would I use to accomplish this, or c) is this a lost cause?

    Daniel.

    Read the article

  • Can't access my accelerated hard disk from msdos after installing linux on ssd cache

    - by Chibueze Opata
    I mistakenly installed Linux Mint on my SSD (I forgot my PC actually came with one). When the installer detected a ~31GiB disk that it wanted to install to, I was a bit confused, since I had set aside 30GB on my primary disk for it, but I clicked continue.

    After installation, I tried to boot back into my Windows and it brought up some Intel RAID Disk Utility stuff saying I should disable acceleration on a disk because something couldn't be found. I cancelled it, but whatever I tried (recovery tools, setups, etc.), I just couldn't access the drive, which was apparently using the SSD as cache. Since then I've been stuck.

    I tried setting the 'raid' flag on the disk from GParted; still I couldn't access it. I tried the diskraid utility from the Windows recovery disk, and it said it couldn't detect any RAID. diskpart sees the partition but doesn't see the volume; when I remove the raid flag, it sees the volume as one of RAW type, and I can't access anything. I can however mount the drive from a terminal in Mint and access my files, but I don't have any backup media at the moment, so I can't simply do a factory re-install.

    Please, how do I go about solving the issue? Precisely, I would like to know how to boot into the drive again. Thanks!
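
    For reference, a couple of probes that can sometimes expose an Intel fake-RAID volume from a Linux live session (a sketch only; whether dmraid handles a Smart Response cache volume like this one is an open assumption):

        # list any RAID metadata sets dmraid can see (Intel metadata shows up as "isw")
        sudo dmraid -s

        # try to activate them; an accessible volume would appear under /dev/mapper/
        sudo dmraid -ay
        ls /dev/mapper/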

    Read the article

  • Tell Tomcat to drop requests instead of dying "All threads (150) are currently busy"

    - by Nicolas Raoul
    My Tomcat 6.0.26 sometimes dies saying:

        SEVERE: All threads (150) are currently busy, waiting. Increase maxThreads (150) or check the servlet status

    ... then Tomcat shuts down, and users can't access the webapp until I restart Tomcat manually. Some of the threads do take a long time to execute; that is by design, not a thread-gone-wild problem. I know I could increase maxThreads, but that is not a viable solution, because the server might then receive even more requests.

    QUESTION: Instead of dying, can I tell Tomcat to just drop requests when maxThreads is reached and the AJP/1.3 backlog is full?

    Below is my server.xml, in any case:

        <?xml version='1.0' encoding='utf-8'?>
        <Server port="8005" shutdown="SHUTDOWN">
          <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
          <Listener className="org.apache.catalina.core.JasperListener" />
          <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
          <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
          <GlobalNamingResources>
            <Resource name="UserDatabase" auth="Container"
                      type="org.apache.catalina.UserDatabase"
                      description="User database that can be updated and saved"
                      factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                      pathname="conf/tomcat-users.xml" />
          </GlobalNamingResources>
          <Service name="Catalina">
            <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" minSpareThreads="100"/>
            <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" enableLookups="false"
                       useBodyEncodingForURI="true" backlog="150" maxThreads="150"
                       executor="tomcatThreadPool" keepAliveTimeout="5000" connectionTimeout="300000" />
            <Engine name="Catalina" defaultHost="localhost" jvmRoute="ecm1">
              <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
              <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true"
                    xmlValidation="false" xmlNamespaceAware="false">
              </Host>
            </Engine>
          </Service>
        </Server>
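
    One knob worth knowing about, as an untested sketch: the documented acceptCount attribute of the HTTP connector caps the queue of connections waiting for a free thread, and connections beyond it are refused rather than accumulated (whether this fully covers the AJP path in this setup is an assumption):

        <!-- sketch: refuse excess connections instead of queueing them indefinitely -->
        <Connector port="8080" protocol="HTTP/1.1"
                   executor="tomcatThreadPool"
                   acceptCount="25"
                   connectionTimeout="20000"
                   redirectPort="8443" />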

    Read the article

  • Mac OSX and root login enabled

    - by reza
    All, I am running OS X 10.6.8. I have enabled root login through Directory Utility and assigned a password, but I get an error when I try to ssh root@localhost:

        ssh -v root@localhost
        OpenSSH_5.2p1, OpenSSL 0.9.8r 8 Feb 2011
        debug1: Reading configuration data /Users/rrazavipour-lp/.ssh/config
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to localhost [127.0.0.1] port 22.
        debug1: Connection established.
        debug1: identity file /Users/rrazavipour-lp/.ssh/identity type -1
        debug1: identity file /Users/rrazavipour-lp/.ssh/id_rsa type 1
        debug1: identity file /Users/rrazavipour-lp/.ssh/id_dsa type 2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.2
        debug1: match: OpenSSH_5.2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.2
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'localhost' is known and matches the RSA host key.
        debug1: Found key in /Users/rrazavipour-lp/.ssh/known_hosts:47
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: publickey
        debug1: Offering public key: /Users/rrazavipour-lp/.ssh/id_dsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Trying private key: /Users/rrazavipour-lp/.ssh/identity
        debug1: Offering public key: /Users/rrazavipour-lp/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Next authentication method: keyboard-interactive
        Password:
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: Authentications that can continue: publickey,keyboard-interactive
        debug1: No more authentication methods to try.
        Permission denied (publickey,keyboard-interactive).

    What am I doing wrong? I know I have the password correct.
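
    One guess worth ruling out (an assumption, not a confirmed diagnosis): sshd decides about root logins on its own, regardless of Directory Utility, and a PermitRootLogin restriction produces exactly this kind of silent keyboard-interactive failure. On 10.6 the daemon config lives at /etc/sshd_config:

        # check what sshd permits for root
        grep -i permitrootlogin /etc/sshd_config

        # if it is set to "no", change it and restart the Remote Login service:
        # PermitRootLogin yes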

    Read the article

  • make local only daemon listening on different interface (using iptables port forwarding)?

    - by UniIsland
    I have a daemon program which listens on 127.0.0.1:8000. I need to access it when I connect to my box over VPN, so I want it to listen on the ppp0 interface too.

    I've tried the "ssh -L" method. It works, but I don't think it's the right way to do it, having an extra ssh process running in the background. I tried the "netcat" method, but it exits when the connection is closed, so it's not a valid way of "listening". I also tried several iptables rules; none of them worked, and I'm not listing here all the rules I've used, but for example:

        iptables -A FORWARD -j ACCEPT
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000

    The above ruleset doesn't work. I have net.ipv4.ip_forward set to 1. Does anyone know how to redirect traffic from a ppp interface to lo? Say, listen on "192.168.45.1:8000 (ppp0)" as well as "127.0.0.1:8000 (lo)". There's no need to alter the port. Thanks!
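
    If the goal is just a second listener that relays to the loopback-only daemon, a small userspace relay avoids the NAT problem entirely (the kernel treats packets DNATed to 127.0.0.1 from another interface as martians by default). A sketch using socat, with the ppp0 address taken from the question:

        # accept connections on the VPN-facing address and relay each one to the local daemon
        socat TCP-LISTEN:8000,bind=192.168.45.1,fork,reuseaddr TCP:127.0.0.1:8000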

    Read the article

  • java.lang.OutOfMemoryError: unable to create new native thread

    - by Brad
    I consistently get this exception when trying to run my JUnit tests on my Mac:

        java.lang.OutOfMemoryError: unable to create new native thread
            at java.lang.Thread.start0(Native Method)
            at java.lang.Thread.start(Thread.java:658)
            at java.util.concurrent.ThreadPoolExecutor.addIfUnderMaximumPoolSize(ThreadPoolExecutor.java:727)
            at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:657)
            at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:197)
            at com.google.appengine.tools.development.ApiProxyLocalImpl$PrivilegedApiAction.run(ApiProxyLocalImpl.java:184)
            at java.security.AccessController.doPrivileged(Native Method)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.doAsyncCall(ApiProxyLocalImpl.java:172)
            at com.google.appengine.tools.development.ApiProxyLocalImpl.makeAsyncCall(ApiProxyLocalImpl.java:138)

    The same set of unit tests passes perfectly fine on Ubuntu and Windows. Some information about my system resources on the Mac:

        $ ulimit -a
        core file size (blocks, -c) 0
        data seg size (kbytes, -d) unlimited
        file size (blocks, -f) unlimited
        max locked memory (kbytes, -l) unlimited
        max memory size (kbytes, -m) unlimited
        open files (-n) 1024
        pipe size (512 bytes, -p) 1
        stack size (kbytes, -s) 8192
        cpu time (seconds, -t) unlimited
        max user processes (-u) 266
        virtual memory (kbytes, -v) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07-334-10M3326)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02-334, mixed mode)

    The reason I don't think this is an application issue is that the same tests pass in different environments. I have tried setting the heap to 1024m and 512m and setting the stack to 64k and 128k (and each of these combinations) with no luck. My open files limit was originally 256 and I have bumped it to 1024. I have been googling around for a bit, and all posts say to decrease heap size and increase stack size, but that doesn't seem to help. Anyone have any more ideas?

    EDIT: Here is some environment information from my Ubuntu box:

        $ ulimit -a
        core file size (blocks, -c) 0
        data seg size (kbytes, -d) unlimited
        scheduling priority (-e) 20
        file size (blocks, -f) unlimited
        pending signals (-i) 16382
        max locked memory (kbytes, -l) 64
        max memory size (kbytes, -m) unlimited
        open files (-n) 1024
        pipe size (512 bytes, -p) 8
        POSIX message queues (bytes, -q) 819200
        real-time priority (-r) 0
        stack size (kbytes, -s) 8192
        cpu time (seconds, -t) unlimited
        max user processes (-u) unlimited
        virtual memory (kbytes, -v) unlimited
        file locks (-x) unlimited

        $ java -version
        java version "1.6.0_24"
        Java(TM) SE Runtime Environment (build 1.6.0_24-b07)
        Java HotSpot(TM) 64-Bit Server VM (build 19.1-b02, mixed mode)
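
    One difference that stands out in the output above: max user processes is 266 on the Mac but unlimited on Ubuntu, and native threads count against that cap. A quick experiment, assuming that limit is the culprit (the numbers are arbitrary):

        # current caps
        ulimit -u
        sysctl kern.maxproc kern.maxprocperuid

        # raise the per-user cap for this shell (it cannot exceed kern.maxprocperuid),
        # then re-run the tests from the same shell
        ulimit -u 512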

    Read the article

  • Graphic Design in Outlook HTML Emails

    - by PhilPursglove
    At the moment we are creating artwork in Word and saving it as an HTML file, then opening a new email, clicking Insert on the menu, clicking 'File', selecting the HTML file, and choosing 'insert as text'. The Word document is then embedded into the email and we can create HTML links from there. The problem with this method is that we are limited by what we can create visually in Word: the artwork just does not look professional enough, and we find that sometimes the headers or footers do not appear or do not stay in their correct positions.

    What I would like to do is start in Adobe InDesign (the graphics package we use). So far I have been able to: create artwork, buttons, and hyperlinks in InDesign; export it as a PDF, maintaining the hyperlinks; save it as an HTML document; open a new email; and insert the HTML file, choosing 'insert as text'. The problem with this method is that the images move about and the text is all different sizes, but on the plus side, the hyperlinks have been retained. So I am almost there, but not quite. Can anyone suggest what I need to do to get the design to display 'correctly' in Outlook?
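
    For context (a general sketch, not a fix specific to the InDesign export): Outlook 2007 and later render HTML with Word's engine, so email designers usually hand-code a table-based layout with inline styles, sliced images, and absolute image URLs instead of relying on exported HTML. The URLs below are placeholders:

        <table width="600" cellpadding="0" cellspacing="0" border="0">
          <tr>
            <td style="font-family: Arial, sans-serif; font-size: 14px;">
              <a href="http://example.com/offer">
                <img src="http://example.com/images/header.jpg" width="600" alt="Header"
                     style="display: block;" border="0" />
              </a>
            </td>
          </tr>
        </table>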

    Read the article

  • Resolving a BSOD/CPU/GPU issue...

    - by Christian Sciberras
    Hello all, I'm getting a BSOD / system crash (sometimes the PC just quits without a BSOD).

    Hardware Specifications:

        cpu: i7 920 2666MHz / 8 cores (not OCed afaik)
        mobo: Asus P6T SE
        ram: 2x Corsair CM3X2G1333C9 (64bit DDR3 667MHz)
        gfx: ATI Radeon HD 5970 1GB (XFX HD5970 BE)
        os: Windows 7 Ultimate 64 bit (legit)

    All BIOS, firmware, and drivers are up to date (as of today).

    Symptoms: Sometimes the PC runs smoothly, sometimes I get this BSOD. The BSOD always happens when I'm doing something related to graphics, such as viewing a video or playing a game. I know about the imminent BSOD ~10 seconds in advance: the PC starts freezing occasionally, increasing in frequency and length of lag (I noticed processor usage increased, from Process Monitor). I've tweaked BIOS settings occasionally, but afaik it was in vain; a day or so ago I reset them to factory settings.

    BSOD contents:

        The computer has rebooted from a bugcheck. The bugcheck was: 0x00000101 (0x0000000000000019, 0x0000000000000000, 0xfffff88001f35180, 0x0000000000000004).

        15-12-2010: A fatal hardware error has occurred. Reported by component: Processor Core. Error Source: Machine Check Exception. Error Type: Internal Timer Error. Processor ID: 4

        23-12-2010: A fatal hardware error has occurred. Reported by component: Processor Core. Error Source: Machine Check Exception. Error Type: Internal Timer Error. Processor ID: 2

    Important: The interesting thing is that although the event log (and the BSOD screen) blame a "secondary processor", Windows Action Center sometimes blamed the GFX driver for the same error. It is also interesting to note that after hibernating my PC, I always get the BSOD.

    Read the article

  • Android failure to boot on LG [migrated]

    - by Ukavi
    I need to recover data from my AT&T LG Thrill Android phone.

    Background: My AT&T LG Thrill phone's battery died a couple of days ago because I forgot to charge it. When I charged the phone and tried to turn it on, it showed the LG logo followed by the dropping balls and the AT&T "Rethink Possible" screen. I then get a message that the application Google Services Framework has crashed, and the phone goes into a loop: the dropping balls show again, followed by the "Rethink Possible" screen. This sequence repeats over and over and the phone never gets out of this loop. I have been able to go into the recovery screen (both Safe Mode and the Android Recovery Service) and have cleared the cache, etc. However, I DO NOT want to wipe user data and restore to factory settings, as this will wipe all of my data (pictures, application data, etc).

    Solution Needed: I need a suggestion for a way of accessing my data so that I can back it up onto an SD card or computer. I DO NOT want to root the phone, as this may void the warranty. What I'm looking for is a way of perhaps putting the original flash image on the micro SD card and then having the phone read that image, or some other similar solution that will get the phone out of this loop and allow me to get to the data.

    Read the article

  • what web based tool, to allow a non-technical user to manage authorized keys files on a Linux (fedora/centos/ubuntu/debian) server

    - by Tom H
    (Edit: clarification below.)

    We have a number of groups of developers that change frequently, and a security policy requiring individual logins to servers using RSA or DSA public keys, which is achieved via the standard method of adding each id_dsa.pub to the user's authorized_keys file. I am using Chef to sync the user accounts across machines; however, our previous method of using Webmin to manage the user passwords was not designed for key-based auth, and hence is not easy to use for non-technical users.

    The developers log in from the WAN using ssh; they can either provide their own key, or an administrator will send them a private key. The development machines are located in the cloud, and we have a single server available to host the master set of accounts.

    Obviously I could deploy LDAP or another centralised authentication system, but that seems a bit overblown when Webmin worked well for the simple case. It is easy to achieve synchronised users, groups, and passwords across a bunch of low-security development boxes using Webmin's clustered users and groups. However, looking at the currently installed Webmin, creating the authorized keys is not as easy as creating user accounts and passwords. (It's possible, but not easy - some functionality is in the Usermin module, or would require some tedious steps.)

    Ideally I'd like a web interface that is pretty much dedicated to creating users and groups, can generate key pairs on the fly, and can accept pasted-in public keys to add to a user's authorized_keys file. If the tool synced the users and keys as well, that would be great, but I can use Chef to do that part if the accounts are created correctly on the "master" server.
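
    Whatever the front end, the server-side step it has to perform is small. A minimal sketch of the helper a web form could call (all names are hypothetical; run as root):

        #!/bin/sh
        # usage: add_key.sh <username> "<public key line>"
        user="$1"; key="$2"
        home=$(getent passwd "$user" | cut -d: -f6) || exit 1
        install -d -m 700 -o "$user" "$home/.ssh"
        printf '%s\n' "$key" >> "$home/.ssh/authorized_keys"
        chown "$user" "$home/.ssh/authorized_keys"
        chmod 600 "$home/.ssh/authorized_keys"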

    Read the article

  • cfengine3 file_copy only on source side change

    - by megamic
    I am using the 'digest' copy method for all file copy promises; because of the way we package and deploy software, I can't rely on mtime as the criterion for updating files. For various reasons, I am not employing the client-server approach with a central configuration server: rather, we package and deploy our entire configuration module to each server, so from CFEngine's perspective the source and target are both local on the server it is running on.

    The problem I am having with this approach is that the source will always update the target when they differ - which is what I want most of the time, usually because the source has been updated. However, like many other CFEngine users, we run an operational environment where emergency fixes occasionally have to be applied immediately, meaning we don't have time to rebuild and redeploy a configuration module, and the fix will often be applied by deploying a tarball with specific changes. Of course this is problematic if CFEngine comes along five minutes later and reverts the changes.

    What we would like is to be able to make small, incremental changes to our servers without them being reverted until the next deployment cycle, at which time the new source files would be copied. We do not consider random file corruption or mistaken changes to involve enough risk to warrant having CFEngine constantly revert deployments to their source copy; the ability to deploy emergency fixes and have them stay that way until the next deployment would be of much greater value and utility.

    So, after all that, my question is this: is CFEngine capable of detecting whether it was the source or the target that changed when the files differ, and if so, is there a way to use the 'digest' copy method but only when the source side changed? I am very open to other ideas and approaches as well, as I am still quite new to this whole configuration management thing.
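
    For reference, the shape of the promise in question, modeled on the CFEngine 3 standard-library pattern (an untested sketch; bundle and path names are hypothetical):

        bundle agent deploy_config
        {
        files:
            "/opt/app/etc"
              copy_from    => digest_copy("/srv/config-module/app/etc"),
              depth_search => recurse("inf");
        }

        body copy_from digest_copy(from)
        {
          source  => "$(from)";
          compare => "digest";   # copy whenever checksums differ, ignoring mtime
        }

        body depth_search recurse(d)
        {
          depth => "$(d)";
        }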

    Read the article

  • File corruption after copying files in Windows 7 64 bit using two methods

    - by DustByte
    I have 5000 pictures and other files in a directory taking up 35 GB, and I want to duplicate this directory.

    Method 1: I do a simple copy and paste of the directory in Explorer. I have the habit of checking checksums after copying important files, and in this case I noticed that around 2000 files failed the MD5 test. On closer inspection of a randomly chosen JPEG with differing checksums, it turned out that some XMP metadata had changed. In particular, the tag <MicrosoftPhoto:DateAcquired> had changed its date from 2009 to today (possibly around the time I was copying the files). I have no idea what triggered this XMP data to change, exactly when it changed, or why it affected these particular files, but at least it seems to explain the checksum discrepancy.

    Method 2: As I want exact duplicates of the files, I tried the program FreeFileSync to mirror the directory, hoping no XMP metadata would mysteriously change. A checksum test, in addition to a thorough file comparison test in FreeFileSync, led to two similar but different results: 31 files fail the checksum test, and 23 files fail the file comparison test. The smaller set is not entirely contained in the bigger set, although many files occur in both. What is alarming here is that not only JPEGs are flagged as altered, but also some AVIs, MPGs, and a large 7-Zip file. Closer inspection of one JPEG indicates that it is indeed corrupt: the bottom half of the picture is simply plain gray. Due to the size of the 7-Zip file, I have not been able to pin down its discrepancy.

    Note that in both methods, every file has the correct file size after being copied.

    Question: Any thoughts on what is possibly going on here? I have never had this problem before, and I am now terrified that files get corrupted after simple actions like copy/paste and file sync. Even if I manage to successfully copy the files somehow, I would still like an explanation for this.
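
    For reproducing the checksum comparison without extra tools, Windows ships a built-in hasher (the path below is a placeholder):

        certutil -hashfile "C:\photos\IMG_0001.JPG" MD5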

    Read the article

  • Windows XP restarting repeatedly after a fresh restore

    - by DWilliams
    I don't normally deal with Windows, but someone wanted me to fix their computer running Windows XP. It was restarting every time it tried to boot, and it would not go into safe mode; the result was the same regardless of the selected mode. The computer is about 4 years old and has been running the same installation for that entire time, so I figured the easiest solution was just to back up their files and re-install.

    I loaded the computer up with a live CD and copied their files off to a USB drive, then proceeded to run HP's "factory restore" feature (which I'm not particularly fond of; I'd rather have a disk to install from than reload all the crapware HP gets paid to install for you). It restored, I put all their files back, installed their programs, and started the full Windows update process. Everything seemed great, so I left and told them what to do once it finished.

    A few hours pass, and my phone rings: apparently it started doing the exact same thing as before once the updates finished. I don't have the computer sitting in front of me now, so I can't really provide any more information than that. What could be causing this and, more importantly, how do I fix it? The fact that the same problem resurfaced after the restore makes me think it's either a hardware problem or an update breaking the computer.

    Read the article

  • Dante (SOCKS server) not working

    - by gregmac
    I'm trying to set up a SOCKS proxy using Dante for testing purposes. However, I can't even get it to work with a web browser, after looking at several tutorials on how to do that. I've tried in both IE and Firefox, in both cases using "Manual proxy configuration", leaving everything blank except for the SOCKS host, where I put in the IP of my proxy and the port number (1080). I just get "Server not found" / "Problems loading this page", and I don't see anything in danted, even running in debug mode.

    If I do a telnet 10.0.0.40 1080, I do see the connection open in the danted debug output, so I know that much is working. Here's my config:

        logoutput: stdout /var/log/danted/danted.log
        internal: eth0 port = 1080
        external: eth0
        method: username none #rfc931
        user.privileged: proxy
        user.notprivileged: nobody
        user.libwrap: nobody
        connecttimeout: 30 # on a lan, this should be enough if method is "none".

        client pass {
            from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0
        }
        client pass {
            from: 127.0.0.0/8 port 1-65535 to: 0.0.0.0/0
        }
        client block {
            from: 0.0.0.0/0 to: 0.0.0.0/0
            log: connect error
        }
        block {
            from: 0.0.0.0/0 to: 127.0.0.0/8
            log: connect error
        }
        pass {
            from: 10.0.0.0/8 to: 0.0.0.0/0
            protocol: tcp udp
        }
        pass {
            from: 127.0.0.0/8 to: 0.0.0.0/0
            protocol: tcp udp
        }
        block {
            from: 0.0.0.0/0 to: 0.0.0.0/0
            log: connect error
        }

    I'm sure I'm probably missing something simple, but I'm lost. I haven't even thought about SOCKS since the late 90's.
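
    A way to take the browsers out of the loop while testing, assuming curl is available on a client inside 10.0.0.0/8: drive a request through the proxy directly and watch the danted debug output at the same time.

        curl -v --socks5 10.0.0.40:1080 http://www.example.com/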

    Read the article

  • Problems with word completion on Windows Mobile

    - by Rowland Shaw
    For "some reason" the word completion function on my windows mobile phone (HTC Diamond, rebadged as a T-Mobile MDA Compact IV (UK) running WM6.1 with HTC Touch Flo 3D) hasn't worked since one of my firends was taking a look at the phone (I remember him bitching about it being too obtrusive for him, as an iPhone fanboy). I've checked all the obvious settings ( Start Input Word Completion ) and everything looks set there; I tried a hard reset, to no avail and even tried upgrading the ROM t the latest from my network provider. I even tried walking into the store where I bought the phone, and the staff couldn't fix the issue. I still have my old handset, which also runs WM6.1 (a T-Mobile MDA Compact III (UK), albeit without Touch Flo 3D), and the word completion works on there, so I'm a little confused as to why I can't get it to work again on my new handset. Can anybody identify why this might not be working, or help me fix it? Edit: Even "Touch Input Settings" has both "Word Completion in T9 mode" and "Word Completion in ABC mode" checked. The full qwerty keyboard option is in T9 mode, and word completion works for this input method; It still does not work for my preferred, "Letter Recogniser" method.

    Read the article

  • Docs for OpenSSH CA-based certificate based authentication

    - by Zoredache
    OpenSSH 5.4 added a new method for certificate authentication (changes):

        * Add support for certificate authentication of users and hosts using a
          new, minimal OpenSSH certificate format (not X.509). Certificates
          contain a public key, identity information and some validity
          constraints and are signed with a standard SSH public key using
          ssh-keygen(1). CA keys may be marked as trusted in authorized_keys
          or via a TrustedUserCAKeys option in sshd_config(5) (for user
          authentication), or in known_hosts (for host authentication).
          Documentation for certificate support may be found in ssh-keygen(1),
          sshd(8) and ssh(1) and a description of the protocol extensions in
          PROTOCOL.certkeys.

    Are there any guides or documentation beyond what is mentioned in the ssh-keygen man page? The man page covers how to generate certificates and use them, but it doesn't really provide much information about the certificate authority setup. For example, can I sign the keys with an intermediate CA, and have the server trust the parent CA?

    This comment about the new feature seems to mean that I could set up my servers to trust the CA, then set up a method to sign keys, and then users would not have to publish their individual keys on the server. It also seems to support key expiration, which is great, since getting rid of old/invalid keys is more difficult than it should be. But I am hoping to find more documentation describing the total configuration (CA, SSH server, and SSH client settings) needed to make this work.
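
    A minimal end-to-end sketch of the flow the man pages describe (identities, principals, and paths below are placeholders):

        # 1. create a CA key pair (ideally kept off the servers)
        ssh-keygen -f ca_key

        # 2. sign a user's public key: -I is the certificate identity,
        #    -n the allowed principals, -V the validity interval
        ssh-keygen -s ca_key -I alice -n alice -V +52w id_rsa.pub
        # this produces id_rsa-cert.pub, which the client offers automatically

        # 3. on each server, trust the CA for user authentication (sshd_config):
        # TrustedUserCAKeys /etc/ssh/ca_key.pub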

    Read the article

  • HP Pavillion DV6500 recovery disk failure

    - by Scott W
    I recently attempted to re-install Windows Vista on an HP Pavillion DV6500 using the factory recovery DVDs, but encountered a strange problem. When the recovery disk attempted to reformat the hard disk, it failed at 22%. The error message provided was not very informative, just the error code "0x400110020000 1005". A Google search turned up some people with a similar problem who asserted that HP has been known to ship corrupted recovery DVDs. The recovery disk did manage to reformat the recovery partition before failing, though, so recovering from the partition is no longer an option.

    It would be possible to reinstall from an off-the-shelf retail copy of Vista and then pull the drivers from HP's website, but I don't have access to a copy of Vista, and it would really be outrageous to have to purchase a new OS when I have a perfectly valid license already. I thought about biting the bullet and upgrading to Windows 7, but my understanding is that without Vista installed I'd be unable to use the upgrade version, and would be forced to purchase the more expensive non-upgrade retail copy (!). Can anyone suggest a possible solution to this Catch-22? I've run out of ideas.

    Read the article

  • Can I tell if crashplan has backed up a particular file in a particular state?

    - by Chris Cogdon
    I would like to be able to tell, programmatically, whether CrashPlan has backed up a particular file, including the current updates to that file - i.e., that the current contents of a file are backed up.

    It's relatively easy to tell when CrashPlan last backed up a file: its file name appears in /usr/local/crashplan/log/backup_files.log.0, and with some accuracy I could compare the backup time with the last modification time of the file, but that method appears to be somewhat dubious.

    A couple of methods I could think of, but I don't know how to do:

    1. Compare the current file to CrashPlan's metadata about that file. This needs knowledge about the format of CrashPlan's "cache" files as well as the hashing system used. This might be achievable through the CLI, but the CLI is just a portal into the GUI, and I need something that's scriptable.

    2. Restore the file to a temporary directory, and compare it. Unfortunately, there is no CLI to do restores; the GUI is the only way.

    I'll describe what I'm trying to achieve. It would be nice to know how to do the above, even if there are alternative methods for the following: I'm using CrashPlan for continuous backups of my PostgreSQL database, using WAL archives. In the current configuration, the archive command copies the files to an archive directory, which is backed up by CrashPlan. Every so often I manually confirm (or just trust) that a group of WALs are backed up, remove them from the archive directory, and occasionally do a restore through the GUI to ensure I can retrieve current and "deleted" WALs. The xlog directory is backed up too, so I have a good chance of doing a near-full restore even if a particular xlog hasn't been archived by PostgreSQL yet.

    I'd like to be able to automate this process, which necessitates either confirming the backup status and recency, or automating a restore for comparison purposes. (As a bonus, if the method is trustworthy, I could turn the archive_command from "copy to archive directory" into "confirm CrashPlan has backed up the current version", and do away with the archive directory completely.)

    (And, yes, I'm doing regular pg_dumpall's, in addition to the above.)
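
    A rough timestamp-based heuristic along the lines of the mtime comparison described above, as a sketch only (it assumes, unverified, that a file's path appears in backup_files.log.0 each time it is backed up):

        #!/bin/sh
        # usage: maybe_backed_up.sh /path/to/file
        f="$1"
        log=/usr/local/crashplan/log/backup_files.log.0
        grep -qF "$f" "$log" || { echo "never seen in backup log"; exit 1; }
        # if the file is newer than the last log write, its current
        # contents cannot have been picked up yet
        if [ "$f" -nt "$log" ]; then
            echo "modified after last backup activity: current contents unconfirmed"
        else
            echo "older than last backup log write (weak evidence it is current)"
        fi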

    Read the article

  • How to set only specific nginx server block into maintenance mode programmatically

    - by Ville Mattila
    I am looking for a solution to automate one of our applications' deployment processes. At the beginning of deployment, I would like to programmatically set the specified server into maintenance mode, and after the deployment has completed, remove the maintenance-mode flag from the nginx server. By maintenance mode, I mean that nginx should respond with HTTP response code 503 to all requests (with a possible custom page).

    I know how to make the server block respond with a 503 code (see http://www.cyberciti.biz/faq/custom-nginx-maintenance-page-with-http503/), but the question is how to do this programmatically and most efficiently. Two options have come to my mind.

    Option 1: At the beginning of the deployment process, write a maintenance file into the document root, and conditionally check for the existence of the maintenance file in the nginx server config:

        server {
            if (-f $document_root/in_maintenance_mode) {
                return 503;
            }
        }

    This method carries a certain overhead, as the file's existence is checked for each request. Is it possible to check the file's existence only when loading the nginx config?

    Option 2: The deployment script replaces the whole nginx server configuration file with a maintenance version and swaps it back at the end of the deployment. If this method is used, I am concerned about other automation processes, like Puppet, that might override the maintenance configuration file.
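
    For completeness, a sketch of how Option 1 is usually wired up end to end (the custom page and paths are placeholders):

        # nginx server block
        if (-f $document_root/in_maintenance_mode) {
            return 503;
        }
        error_page 503 @maintenance;
        location @maintenance {
            rewrite ^ /maintenance.html break;
        }

        # deployment script
        touch /var/www/app/in_maintenance_mode    # enter maintenance mode
        # ... deploy ...
        rm -f /var/www/app/in_maintenance_mode    # leave maintenance mode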

    Read the article

  • How to discover true identity of hard disk?

    - by F21
    I have two fake external hard drives that claim to have a storage capacity of 2TB. I pulled the enclosures apart, and the hard drives seem to be refurbished ones with their labels replaced by Barracuda LP 2000 GB labels (the serial numbers on both labels are the same). Interestingly, one of the drives has 160G written on it in pencil.

    However, the counterfeiters seem to have done something to the firmware, because CrystalDiskInfo reports them as 2TB ST2000DL003 drives. I then deleted the 1.81 TB partition in Windows Disk Management and tried to create a new one and format it. Once I get to this point, the drives make the kind of noise that is common to dying drives.

    I am not interested in using these drives in production, but I am interested in finding their true identity (manufacturer, serial number, model number, etc.) and restoring them to their factory defaults with the right capacity. Can this be done without any special equipment? This would be an interesting learning exercise.

    In the pictures of the drives and the CrystalDiskInfo screens, note that the reported serial numbers are the same (these are two different drives!). How is this done? Did they have to tamper with the controller board? I would assume that changing the firmware doesn't change the serial number at all.
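
    On the identification side, a common first probe from a Linux box is smartctl (the device name below is a placeholder), with the caveat that reflashed firmware can spoof this report just as it fooled CrystalDiskInfo:

        # print the drive's self-reported identity: model, serial, firmware, capacity
        sudo smartctl -i /dev/sdb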

    Read the article

  • Oracle with Kerberos authentication and Windows 2003 Server as KDC

    - by Supaplex
    Hello everyone. I am running Oracle 10.2 on a Windows 2003 Server SP2, which is also the domain controller on the network, and I wish to switch the authentication method from NTS to Kerberos.

    I have spent a lot of time trying to configure Oracle with Kerberos authentication via the Oracle Advanced Security option in the Net Manager utility. I have disabled NTS so that Kerberos is promoted as the preferred authentication method, but as soon as the configuration is saved from Net Manager and I restart the Oracle server service, Oracle will not start. I don't know what Oracle is complaining about, because I don't know where to look for the Oracle error log.

    My first question is: how can I figure out what's bugging Oracle? My second question: is there a good tutorial for setting up Oracle on Windows 2003 with Kerberos authentication, where the Windows 2003 Server is the KDC? Maybe there is a book I can get? I have read Oracle's own guide, but it is mostly for Linux/Unix. Thanks a lot!
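
    For what it's worth, the Kerberos side of such a setup is normally declared in sqlnet.ora using Oracle's Advanced Security parameters; a sketch with placeholder paths and a placeholder service name:

        SQLNET.AUTHENTICATION_SERVICES = (KERBEROS5)
        SQLNET.AUTHENTICATION_KERBEROS5_SERVICE = orasrv
        SQLNET.KERBEROS5_CONF = C:\oracle\krb5\krb5.conf
        SQLNET.KERBEROS5_KEYTAB = C:\oracle\krb5\v5srvtab

    As for the first question, network-layer failures are usually recorded in sqlnet.log and the listener log under the Oracle home, which would be the first place to look for the actual complaint.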

    Read the article

  • Dell XPS 17 Laptop USB SS Ports don't work until machine rebooted

    - by Matt
    I have a new Dell XPS 17 laptop and have an issue where the two USB SS ports on the rear of the machine do not recognise anything when I start up. I have tried a mouse, keyboard, headset, and USB display. If I plug in a USB hub, the power light on the hub lights up (I presume this is hard-wired), but a mouse plugged into the hub does not light up. If I restart the PC, the ports work fine. The other two USB ports work fine all the time.

    I have switched off the power-saving setting in Device Manager (Power Management > "Allow the computer to turn off this device to save power"), and also in the advanced power settings (USB settings > USB selective suspend setting: Disabled). I have also disabled USB PowerShare in the BIOS (not sure what this actually does!). This is becoming very annoying, having to restart the machine every day before I can start work. Running Win 7 Pro x64.

    EDIT: I have found that setting USB Emulation to Disabled has helped the devices to be recognised during start-up, but they keep dropping out completely when in use.

    EDIT 2: I contacted Dell about this issue, and they told me to update the BIOS driver. When I asked if I should downgrade from the factory-installed A13 to the A12 posted on the Dell website, they said no. However, there is now an A14 available on the website - has anyone tried this?

    Read the article
