Search Results

Search found 28325 results on 1133 pages for 'test cases'.


  • Hadoop init script asks for password

    - by Ramesh
    I have installed Hadoop on my Ubuntu 12.04 single node. I am trying to execute an init script so that Hadoop runs on startup, but it asks for a password every time I execute it.

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          hadoop services
        # Required-Start:    $network
        # Required-Stop:     $network
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Description:       Hadoop services
        # Short-Description: Enable Hadoop services including hdfs
        ### END INIT INFO

        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
        HADOOP_BIN=/home/naveen/softwares/hadoop-1.0.3/bin
        NAME=hadoop
        DESC=hadoop
        USER=naveen
        ROTATE_SUFFIX=

        test -x $HADOOP_BIN || exit 0

        RETVAL=0
        set -e
        cd /

        start_hadoop () {
            set +e
            su $USER -s /bin/sh -c $HADOOP_BIN/start-all.sh > /var/log/hadoop/startup_log
            case "$?" in
                0)
                    echo SUCCESS
                    RETVAL=0
                    ;;
                1)
                    echo TIMEOUT - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
                *)
                    echo FAILED - check /var/log/hadoop/startup_log
                    RETVAL=1
                    ;;
            esac
            set -e
        }

        stop_hadoop () {
            set +e
            if [ $RETVAL = 0 ] ; then
                su $USER -s /bin/sh -c $HADOOP_BIN/stop-all.sh > /var/log/hadoop/shutdown_log
                RETVAL=$?
                if [ $RETVAL != 0 ] ; then
                    echo FAILED - check /var/log/hadoop/shutdown_log
                fi
            else
                echo No nodes running
                RETVAL=0
            fi
            set -e
        }

        restart_hadoop() {
            stop_hadoop
            start_hadoop
        }

        case "$1" in
            start)
                echo -n "Starting $DESC: "
                start_hadoop
                echo "$NAME."
                ;;
            stop)
                echo -n "Stopping $DESC: "
                stop_hadoop
                echo "$NAME."
                ;;
            force-reload|restart)
                echo -n "Restarting $DESC: "
                restart_hadoop
                echo "$NAME."
                ;;
            *)
                echo "Usage: $0 {start|stop|restart|force-reload}" >&2
                RETVAL=1
                ;;
        esac
        exit $RETVAL

    Please tell me how to run Hadoop without entering a password.

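    One likely cause, offered as a sketch rather than a confirmed diagnosis: su $USER only prompts for a password when the caller is not root. At boot the init system runs the script as root, so no prompt appears, but invoking it from an ordinary shell always asks. Assuming the script is installed as /etc/init.d/hadoop and is run by hand as user naveen, a sudoers entry avoids the prompt:

        # Assumed file: /etc/sudoers.d/hadoop (create with: sudo visudo -f /etc/sudoers.d/hadoop)
        # Lets naveen run this one init script as root without a password prompt.
        naveen ALL=(root) NOPASSWD: /etc/init.d/hadoop

    After that, sudo /etc/init.d/hadoop start runs without prompting.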

  • KVM Guest Reboot Loop

    - by javano
    I have been pulled into a situation where a KVM server (CentOS 6.2) lost power, and upon reboot one of the guests (XP SP3) hasn't started up again. I have SSH'ed in; someone must have changed something relating to the hypervisor prior to the power loss but not rebooted all the guests. This particular guest wouldn't start because it was configured to use /usr/bin/qemu-system-x86_64, which isn't there now (assuming it was before?). I changed it to use /usr/libexec/qemu-kvm, as this is what all the other guests on this server seem to be using, and it's booting up. Using virt-manager on my local machine I can connect to the display of the XP machine, and it gets as far as this screen: http://support.gateway.com/emachines/issues/2-1131285152-01.gif The problem I face now is that whichever option I choose, the machine just reboots, so it's an endless loop. I thought that perhaps a filesystem error may be present due to the unclean shutdown. There is an XP SP3 ISO mounted under the guest, which I booted from in an attempt to access the recovery tools, but I don't have the Administrator password! I am out of ideas, and it's turning out to be quite the conundrum. Should I use a third-party live CD to test the FS for errors? How else can I troubleshoot these restarts?

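    One avenue worth sketching before anything drastic, under the assumption that the guest disk is a qcow2 image in the usual libvirt location (both path and format are guesses): check the image itself for corruption, since an unclean host shutdown can damage the image as well as the guest's filesystem.

        # Find the disk backing this guest, then check the image for errors.
        # (domblklist needs a newer libvirt; on older builds use:
        #  virsh dumpxml winxp-guest | grep 'source file')
        virsh domblklist winxp-guest
        qemu-img check /var/lib/libvirt/images/winxp-guest.qcow2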

  • Wireshark WPA 4-way handshake

    - by cYrus
    From this wiki page:

        WPA and WPA2 use keys derived from an EAPOL handshake to encrypt traffic. Unless all four handshake packets are present for the session you're trying to decrypt, Wireshark won't be able to decrypt the traffic. You can use the display filter eapol to locate EAPOL packets in your capture.

    I've noticed that the decryption works with (1, 2, 4) too, but not with (1, 2, 3). As far as I know, the first two packets are enough, at least as far as unicast traffic is concerned. Can someone please explain exactly how Wireshark deals with this; in other words, why does only the former sequence work, given that the fourth packet is just an acknowledgement? Also, is it guaranteed that (1, 2, 4) will always work when (1, 2, 3, 4) works?

    Test case: this is the gzipped handshake (1, 2, 4) and an encrypted ARP packet (SSID: SSID, password: password) in base64 encoding:

        H4sICEarjU8AA2hhbmRzaGFrZS5jYXAAu3J400ImBhYGGPj/n4GhHkhfXNHr37KQgWEqAwQzMAgx
        6HkAKbFWzgUMhxgZGDiYrjIwKGUqcW5g4Ldd3rcFQn5IXbWKGaiso4+RmSH+H0MngwLUZMarj4Rn
        S8vInf5yfO7mgrMyr9g/Jpa9XVbRdaxH58v1fO3vDCQDkCNv7mFgWMsAwXBHMoEceQ3kSMZbDFDn
        ITk1gBnJkeX/GDkRjmyccfus4BKl75HC2cnW1eXrjExNf66uYz+VGLl+snrF7j2EnHQy3JjDKPb9
        3fOd9zT0TmofYZC4K8YQ8IkR6JaAT0zIJMjxtWaMmCEMdvwNnI5PYEYJYSTHM5EegqhggYbFhgsJ
        9gJXy42PMx9JzYKEcFkcG0MJULYE2ZEGrZwHIMnASwc1GSw4mmH1JCCNQYEF7C7tjasVT+0/J3LP
        gie59HFL+5RDIdmZ8rGMEldN5s668eb/tp8vQ+7OrT9jPj/B7425QIGJI3Pft72dLxav8BefvcGU
        7+kfABxJX+SjAgAA

    Decode with:

        $ base64 -d | gunzip > handshake.cap

    Run tshark to see if it correctly decrypts the ARP packet:

        $ tshark -r handshake.cap -o wlan.enable_decryption:TRUE -o wlan.wep_key1:wpa-pwd:password:SSID

    It should print:

        1   0.000000 D-Link_a7:8e:b4 -> HonHaiPr_22:09:b0 EAPOL Key
        2   0.006997 HonHaiPr_22:09:b0 -> D-Link_a7:8e:b4 EAPOL Key
        3   0.038137 HonHaiPr_22:09:b0 -> D-Link_a7:8e:b4 EAPOL Key
        4   0.376050 ZyxelCom_68:3a:e4 -> HonHaiPr_22:09:b0 ARP 192.168.1.1 is at 00:a0:c5:68:3a:e4

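    As the quoted wiki text says, the eapol display filter isolates the handshake frames; for example (the -Y flag is from newer tshark releases, older 1.x builds used -R for the same purpose):

        $ tshark -r handshake.cap -Y eapol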

  • Cherrypy web application won't communicate outside localhost via VPN

    - by Geoffrey Shea
    I'm trying to run a Python 2.7/CherryPy web server on Windows 7, connected to a VPN to establish a dedicated IP address. (If I run the exact same application on Windows XP connected to the VPN, it works fine.) On Windows 7 I tried configuring it to use port 8080, 8005, or 80, with no improvement. I turned off Windows Firewall altogether to test, and there was no improvement. If I run Apache on the Win 7 machine on port 80 it works fine, so I'm pretty sure it's not the VPN service or router. If I go to WhatIsMyIP.com it shows that I have the IP address being provided by the VPN. Here is the Python code, but I suspect the problem is the network configuration:

        import cherrypy

        class HelloWorld:
            def index(self):
                return "Hello world!3"
            index.exposed = True

        cherrypy.root = HelloWorld()
        cherrypy.config.update({"global": {
            "server.environment": "production",
            "server.socketPort": 8005
        }})
        cherrypy.server.start()

    This will return a web page if I go to localhost:8005, but not if I go to <the VPN IP address>:8005 from another machine. As I said, if I run Apache on the Win 7 machine on port 80, I can see it at localhost:80 AND at <the VPN IP address>:80 from another machine. Thanks for any light you can shed! Geoffrey

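    One thing worth ruling out, purely as a sketch: if CherryPy binds only to the loopback interface, localhost works while every other address fails, which matches this symptom exactly. In the CherryPy 2.x config style used above, the relevant key should be server.socketHost (treat the key name as an assumption and check it against the installed version):

        cherrypy.config.update({"global": {
            "server.environment": "production",
            "server.socketHost": "0.0.0.0",  # assumption: listen on every interface
            "server.socketPort": 8005
        }})

    Running netstat -an | findstr 8005 on the Win 7 box shows whether the listener sits on 0.0.0.0:8005 or 127.0.0.1:8005.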

  • How can I recover an ext4 filesystem corrupted after a fsck?

    - by Regan
    I have an ext4 filesystem on LUKS over software RAID 5. The filesystem was operating "just fine" for several years when I began to run out of space. I had a 9T volume on 6x2T drives. I began upgrading to 3T drives by doing the mdadm fail, remove, add, rebuild, repeat process until I had a larger array. I then grew the LUKS container, and when I unmounted and tried to resize2fs, I was given the message that the filesystem was dirty and needed e2fsck. Without thinking I just did e2fsck -y /dev/mapper/candybox and it began spewing all kinds of "inode being removed" type messages (can't remember exactly). I killed e2fsck and tried to remount the filesystem to back up the data I was concerned about. When trying to mount at this point I get:

        # mount /dev/mapper/candybox /candybox
        mount: wrong fs type, bad option, bad superblock on /dev/mapper/candybox,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

    Looking back at my older logs, I noticed the filesystem was giving this error each time the machine booted:

        kernel: [79137.275531] EXT4-fs (dm-2): warning: mounting fs with errors, running e2fsck is recommended

    So shame on me for not paying attention :( I then tried to mount using every backup superblock (one after another), and each attempt left this in my log:

        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 0 failed (26534!=65440)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 1 failed (38021!=36729)
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 2 failed (18336!=39845)
        ...
        EXT4-fs (dm-2): ext4_check_descriptors: Checksum for group 11911 failed (28743!=44098)
        BUG: soft lockup - CPU#0 stuck for 23s! [mount:2939]

    Attempts to restart e2fsck result in:

        # e2fsck /dev/mapper/candybox
        e2fsck 1.41.14 (22-Dec-2010)
        e2fsck: Group descriptors look bad... trying backup blocks...
        candy: recovering journal
        e2fsck: unable to set superblock flags on candy

    At this point, I decided it best to order some more drives and make an image using ddrescue. Now, two weeks later, I have an image of the LUKS partition in a .img file:

        # ls -lh
        total 14T
        -rw-r--r-- 1 root root  14T Oct 25 01:57 candybox.img
        -rw-r--r-- 1 root root  271 Oct 20 14:32 candybox.logfile

    After numerous attempts using everything I could find online, I could not coerce e2fsck to do anything on the image, so I used mkfs.ext4 -L candy candybox.img -m 0 -S, and I was able to mount the dirty filesystem read-only without the journal and recover 960G of data. It gave all kinds of errors about various directories not existing and so forth, but I was able to get some stuff, which gave me some hope! I then ran e2fsck again; it had to recreate the root inode and gave a massive list of group count corrections. I accepted the root inode creation and said no to everything else, which left a completely empty filesystem. I re-ran it and said yes to all questions, with the same result, but now a "clean" yet empty filesystem. extundelete gives me 0 recoverable inodes found. And now I'm stuck again; I can't come up with any methods other than dropping to something like photorec, which will give me an absolute mess given how large the filesystem was. I'm willing to re-copy the image from the original array and start over if I can get any suggestions or ideas on a way to get more of my files back.
    I wish I could give more detailed logs of the commands that have run, but the output has long since scrolled past except for what gets logged to syslog, and my memory is not as detailed given the timeframe this has occurred over. Any help is greatly appreciated!

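    For the re-copied image, a minimal sketch of the usual superblock-recovery sequence, assuming the copy is expendable (every block number below is illustrative, and mke2fs -n only reports where the backup superblocks would be, it writes nothing):

        cp candybox.img candybox-work.img        # never operate on the only copy
        losetup --find --show candybox-work.img  # prints the loop device, e.g. /dev/loop0
        mke2fs -n /dev/loop0                     # -n: report backup superblock locations only
        e2fsck -b 32768 -B 4096 /dev/loop0       # retry fsck against a backup superblock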

  • Dell OMCI: Wacky values for Temperature etc.? (Win7x64)

    - by Yablargo
    Hey all. I am running a Dell Precision R5400 workstation with Dell OMCI installed. I am using it to test polling various data over WMI for our monitoring across the enterprise. I'm getting some weird results; perhaps someone can help point me in the direction of some clarification? Posted below is the result of my DCIM\SYSMAN\DCIM_NumericSensor probe for sensor type 2 (temp sensor):

        Microsoft (R) Windows Script Host Version 5.8
        Copyright (C) Microsoft Corporation. All rights reserved.

        -----------------------------------
        DCIM_NumericSensor instance
        -----------------------------------
        Accuracy:
        AccuracyUnits:
        AdditionalAvailability:
        Availability:
        AvailableRequestedStates:
        BaseUnits: 2
        Caption:
        CommunicationStatus:
        CreationClassName: DCIM_NumericSensor
        CurrentReading: -214748365
        CurrentState: Unknown
        Description:
        DetailedStatus:
        DeviceID: Root/MainSystemChassis/TemperatureObj
        ElementName: Temperature Sensor:CPU0
        EnabledDefault: 2
        EnabledState: 2
        EnabledThresholds:
        ErrorCleared:
        ErrorDescription:
        HealthState: 5
        Hysteresis:
        IdentifyingDescriptions:
        InstallDate:
        IsLinear:
        LastErrorCode:
        LocationIndicator:
        LowerThresholdCritical:
        LowerThresholdFatal:
        LowerThresholdNonCritical:
        MaxQuiesceTime:
        MaxReadable:
        MinReadable:
        Name:
        NominalReading:
        NormalMax:
        NormalMin:
        OperatingStatus:
        OperationalStatus: 2
        OtherEnabledState:
        OtherIdentifyingInfo:
        OtherSensorTypeDescription:
        PollingInterval:
        PossibleStates: Unknown,Normal,Fatal,Lower Non-Critical,Upper Non-Critical,Lower Critical,Upper Critical
        PowerManagementCapabilities:
        PowerManagementSupported:
        PowerOnHours:
        PrimaryStatus:
        ProgrammaticAccuracy:
        RateUnits: 0
        RequestedState: 12
        Resolution:
        SensorType: 2
        SettableThresholds:
        Status:
        StatusDescriptions:
        StatusInfo:
        SupportedThresholds:
        SystemCreationClassName: DCIM_ComputerSystem
        SystemName: dt:5Q7BKK1
        TimeOfLastStateChange:
        Tolerance:
        TotalPowerOnHours:
        TransitioningToState: 12
        UnitModifier: 0
        UpperThresholdCritical:
        UpperThresholdFatal:
        UpperThresholdNonCritical:
        ValueFormulation: 2

    I'm not really sure what's going on, but note the CurrentReading: -214748365. I have reinstalled OMCI a few times and installed the OMCI 7.x compatibility package, and I consistently get that value. It almost looks like an issue between 32/64-bit values or something? Do I have to convert it to a float? :)

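    One hedged observation: -214748365 is almost exactly INT32_MIN / 10 (-2147483648 / 10 = -214748364.8), so the sensor is most likely returning the 32-bit "no reading" sentinel with a decimal scaling applied somewhere, not a real temperature. In the CIM NumericSensor model, a real value would be CurrentReading * 10^UnitModifier. A quick look at just the relevant fields, assuming OMCI's usual root\dcim\sysman namespace:

        rem Dump only the fields that matter for the scaling question
        wmic /namespace:\\root\dcim\sysman path DCIM_NumericSensor where "SensorType=2" get ElementName,CurrentReading,UnitModifier,CurrentState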

  • Touchpad does not respond when I am holding a key on the keyboard

    - by Tadeck
    I am experiencing a strange problem using my touchpad and keyboard simultaneously under Windows 7. I have an HP tx2550ew (convertible tablet), and when I hold down some key under Windows 7 (e.g. space, a, s, etc.), the touchpad seems to be blocked. I spotted this while playing Counter-Strike. I don't play many games, and I hadn't played CS since January, so I am not sure when it started behaving like this. I have also tested it outside the game: when I hold space (e.g. on some web page, entering text into an input field) or some letter key, the cursor is not able to move. The problem does not seem to occur when I hold Shift, Ctrl, or Alt. Did any of you experience a similar problem? Do you know what may have caused this? Is there any way I could check what is wrong with my laptop? I have been looking for a solution, but it seems I haven't been looking in the right places, which is why I ask here.
    PS: I am unable to test whether this is touchpad-specific, because I have no mouse at my disposal at the moment (I got so used to the touchpad that I find it more efficient, and I haven't used a mouse with my laptop for months).


  • Compressing and copying large files on Windows Server?

    - by Aaron
    I've been having a hard time copying large database backups from the database server to a test box at another site. I'm open to any ideas that would help me get this database moved without having to resort to a USB hard drive and the mail. The database server is running Windows Server 2003 R2 Enterprise with 16 GB of RAM and two quad-core 3.0 GHz Xeon X5450s. The files are SQL Server 2005 backup files between 100 GB and 250 GB. The pipe is not the fastest, and SQL Server backup files typically compress down to 10-40% of the original, so it made sense to compress the files first. I've tried a number of methods, including:

        gzip 1.2.4 (UnxUtils) and 1.3.12 (GnuWin)
        bzip2 1.0.1 (UnxUtils) and 1.0.5 (Cygwin)
        WinRAR 3.90
        7-Zip 4.65 (7za.exe)

    I've attempted to use the WinRAR and 7-Zip options for splitting into multiple segments. 7za.exe has worked well for me for database backups on another server, which has ~50 GB backups. I've also tried splitting the .BAK file first with various utilities and compressing the resulting segments. No joy with that approach either; no matter the tool, it ends up butting up against the size of the file. Especially frustrating is that I've transferred files of similar size on Unix boxes without problems using rsync+ssh. Installing an SSH server is not an option for the situation I'm in, unfortunately. For example, this is how 7-Zip dies:

        H:\dbatmp>7za.exe a -t7z -v250m -mx3 h:\dbatmp\zip\db-20100419_1228.7z h:\dbatmp\db-20100419_1228.bak

        7-Zip (A) 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Scanning

        Creating archive h:\dbatmp\zip\db-20100419_1228.7z

        Compressing  db-20100419_1228.bak

        System error:
        Unspecified error

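    One more sketch worth trying when a large single-file copy keeps dying mid-transfer (the destination share is a placeholder, and on Server 2003 robocopy comes from the Resource Kit rather than the base OS): restartable mode resumes a broken copy where it stopped instead of starting over.

        rem /Z = restartable mode, /R = retries per file, /W = seconds between retries
        robocopy H:\dbatmp\zip \\testbox\dbadrop db-20100419_1228.7z.* /Z /R:20 /W:30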

  • Upstart Script on CentOS 6

    - by MarcusMaximus
    I'm trying to create an upstart script to run a Python script on startup. In theory it looks simple enough, but I just can't seem to get it to work. I'm using a skeleton script I found here and altered:

        description "Used to start python script as a service"
        author "Me <[email protected]>"

        # Stanzas
        #
        # Stanzas control when and how a process is started and stopped
        # See a list of stanzas here: http://upstart.ubuntu.com/wiki/Stanzas#respawn

        # When to start the service
        start on runlevel [2345]

        # When to stop the service
        stop on runlevel [016]

        # Automatically restart process if crashed
        respawn

        # Essentially lets upstart know the process will detach itself to the background
        expect fork

        # Start the process
        script
            exec su nonrootuser -c "python /usr/local/scripts/script.py"
        end script

    The test script I want it to run is currently a simple Python script that runs without any issue from a terminal:

        #!/usr/bin/python2
        import os, sys, time

        if __name__ == "__main__":
            for i in range(10000):
                message = "shotgunUpstartTest ", i, time.asctime(), " - Username: ", os.getenv("USERNAME")
                #print message
                time.sleep(60)
                out = open("/var/log/scripts/scriptlogfile", "a")
                print >> out, message
                out.close()

    The location /var/log/scripts has permissions 777. The file /usr/local/scripts/script.py has permissions 775. The upstart script /etc/init.d/pythonupstart.conf has permissions 755.

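    Two observations on the job above, offered as a sketch rather than a confirmed diagnosis: upstart only reads jobs from /etc/init/*.conf, so a file under /etc/init.d/ is invisible to it, and expect fork makes upstart wait for a daemonizing fork that this command never performs.

        # Move the job where upstart actually looks, remove "expect fork" from it,
        # then reload and start (the job name comes from the file name):
        sudo mv /etc/init.d/pythonupstart.conf /etc/init/pythonupstart.conf
        sudo initctl reload-configuration
        sudo start pythonupstart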

  • Unable to mount root fs over NFS [on hold]

    - by johnmadrak
    I am attempting to set up a Raspberry Pi running Pidora to boot from an NFS share. My configuration in cmdline.txt is:

        dwc_otg.lpm_enable=0 console=ttyAMA0,115200 console=tty1 root=/dev/nfs nfsroot=<serverip>:/fake/path,nfsvers=3,rw,nolock nfsrootdebug ip=dhcp elevator=deadline rootwait

    On the Pi, the output I see is:

        IP-Config: Got DHCP answer from <router>, my address is <clientip>
        IP-Config: Complete:
             device=eth0, hwaddr=<macaddress>, ipaddr=<clientip>, mask=255.255.255.0, gw=<routerip>
             host=<clientip>, domain=, nis-domain=(none)
             bootserver=<routerip>, rootserver=<serverip>, rootpath=
             nameserver0=<routerip>

    (It pauses for a bit here.)

        VFS: Unable to mount root fs via NFS, trying floppy
        VFS: Cannot open root device "nfs" or unknown-block(2,0); error -6
        Please append a correct "root=" boot option; here are the available partitions:
        .....

    On the NFS server (an OpenVZ container), the output I see in /var/log/messages is:

        Aug 22 23:24:01 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:783 for /fake/path (/fake/path)
        Aug 22 23:24:38 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:741 for /fake/path (/fake/path)
        Aug 22 23:25:25 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:752 for /fake/path (/fake/path)
        Aug 22 23:26:12 vps-4178 rpc.mountd[928]: authenticated mount request from <clientip>:876 for /fake/path (/fake/path)

    To test, I've made sure I can mount (non-root) from both the Pi and another machine, and it worked. Does anyone have an idea of what could be wrong, or how to narrow it down? Thank you in advance for your help.

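    Since ordinary mounts work and mountd keeps authenticating the request, one hedged suspect is the Pi kernel itself: booting with root=/dev/nfs needs NFS-root support compiled into the kernel, not built as a module. A sketch of how to check from the Pi while booted from SD (which config interface exists varies by kernel build):

        # Either line shows whether NFS root was built in (=y, not =m):
        zcat /proc/config.gz | grep -E 'CONFIG_ROOT_NFS|CONFIG_NFS_FS'
        grep -E 'CONFIG_ROOT_NFS|CONFIG_NFS_FS' /boot/config-$(uname -r)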

  • Roundcube "Server Error (OK!)": Lists no messages but can get messages according to the log file

    - by thonixx
    In my server setup there are three virtual machines: one Windows machine, an Ubuntu Server 11.10, and a Debian Squeeze mail server. On the Ubuntu system I have Roundcube installed, and I want to connect to the virtual mail server.
    What's the problem: after login into Roundcube it says "Server Error (OK!)" and lists no messages.
    More information: on the Ubuntu server there is no error in any log file (even Roundcube's log files). In the IMAP log file you can see Roundcube is able to fetch all IMAP messages (I can see them in the IMAP log file created by Roundcube). And on the side of the mail server there are no error messages either. The test connection at the end of Roundcube's configuration works too; there is a "success" notification. Even the basic login at the Roundcube login dialog works without any error message. You can look here for the Roundcube log file: http://fixee.org/paste/wxg36eh/ So, does anyone know what's wrong with Roundcube?


  • Windows 7 caches FTP credentials?

    - by Martin Booka Weser
    On my remote machine I have IIS 7.5 (Windows Server 2008) and set up an FTP site with IIS Manager authentication. I then set up Active Directory user isolation and isolated my users to physical folders according to their names. So far, so good. I can connect with FTP clients from everywhere with the different test accounts that I previously set up in IIS Manager authentication; every user connects to its own folder. When I then tested with Windows 7 as a client, I did the following: Explorer - Computer - right click - Add network address - the IP of my remote machine - user1 - password1. Perfect, it works. I now want to connect as user2, so I deleted this network address and set up a new connection, but with user2 (or even anonymous) instead. Now the strange thing: Windows doesn't even ask me for a password again. It just connects me to the folder of user1. I have already disabled FTP caching in IIS, and I disabled the user1 account in IIS Manager authentication! Still, if I set up a network connection with this Windows 7 machine, it connects to the folder of user1, no matter which username I use (anonymous, administrator, user2, ...). And if I connect with other FTP clients or from other computers, it all works perfectly. So I assume that this one Windows machine somehow caches the credentials... But then, why does IIS still accept these credentials even though I disabled the user1 account??? Thanks.

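    A sketch of how to flush what this client may be holding, with the caveat that Explorer's FTP credential handling is largely undocumented; the commands below are simply the standard places cached connections and stored credentials live:

        rem List current connections, drop them all, then retry the user2 login
        net use
        net use * /delete
        rem Review permanently stored credentials as well:
        rundll32.exe keymgr.dll,KRShowKeyMgr

    If neither shows anything, restarting the explorer.exe process (or logging off and on) clears credentials cached inside Explorer itself.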

  • Performance tweaks and upgrades for VMWare Server 2

    - by sjohnston
    Our software department has a server running VMware Server 2. We typically have 8-10 VMs running as test environments (Win XP and Server 08) for various versions of our software, and one VM that is used as a build server (Win XP). The host is running Server 2003 R2. It has 32 GB RAM, an 8-core Xeon 3.16 GHz CPU, one disk for the host OS, and two RAID disks for VMs. The majority of the time this setup behaves very well and there are no complaints. Other times the VMs can be very laggy. This is sometimes, but not always, correlated with heavy load on the build server. I'm a software developer, not an IT pro, but it seems to me that this machine should be beefy enough to handle this many VMs. Is this occasional performance hit likely just because we're hitting the limits of the hardware, or should I be looking for another culprit? From what I've read, I'm guessing that if there's a bottleneck, it's probably disk I/O with all these VMs running off two disks (especially the build server). Would spreading the VMs over more disks, and/or switching to SSDs, give us a significant performance boost? Other things I've read may increase performance:

        single virtual processor per VM
        removing/disabling unused virtual hardware
        preallocated disk space
        not using snapshots
        setting a reserved memory limit on the host and disabling VM memory swapping

    Can anyone confirm or deny whether any of these improve performance? What other good tweaks have I missed?

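    A way to test the disk I/O theory directly instead of guessing, sketched with stock Windows tooling: sample the host's disk queue length during a laggy spell; sustained values well above the number of spindles in the RAID set usually mean the VMs are waiting on disk.

        rem Sample every 5 seconds; watch the volumes hosting the VMs
        typeperf "\PhysicalDisk(*)\Avg. Disk Queue Length" "\PhysicalDisk(*)\Disk Bytes/sec" -si 5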

  • Should I embed the sRGB color profile in JPEG files?

    - by basic6
    I have a large (growing) collection of scanned images. They are TIFF files, mostly 48-bit in the Adobe RGB color space. This color profile is embedded in the files. When such a file is opened in IrfanView (with plugins), it says (Image - Information) Adobe RGB 1998. "Normal images", like the JPG files from a digital camera, do not (necessarily) have a color profile embedded in the file. I understand that it's necessary to include the Adobe RGB profile in an image file which uses the Adobe RGB space, so the color values can be interpreted correctly. Here's a test image with a completely different color profile; programs that ignore the included profile (like MSIE8 or Gwenview) render it as sRGB (?). I'm planning to convert my TIFF files to JPG, so I'm wondering if there's anything wrong with using IrfanView to save them as sRGB without embedding the sRGB profile. I've heard that images should always be saved with the color profile included. Since every image seems to be interpreted as sRGB by default (by software without color management), I don't understand why the sRGB profile should be included.

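    To verify what a converted file actually carries, one possible check (exiftool is an assumption here; any ICC-aware inspector works the same way):

        # Lists the embedded ICC profile tags; no output means no profile was written
        $ exiftool -ICC_Profile:All converted.jpg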

  • xcopy Not Suppressing File/Directory Query

    - by Daniel Bingham
    Hey folks, I'm attempting to use xcopy to copy a file from one machine to another on our network as part of a Java program. I'm calling xcopy like this:

        xcopy "C:\Program Files\path\to\my\file" "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\path\in\directory\structure\to\file" /e /y /i

    Because I'm calling it from within Java, I need all the prompts to be suppressed. For the most part, /i and /y have done exactly that. However, for this one file /i fails and I get the file-or-directory prompt. The result is that it hangs the entire program. I've also tried calling it with /s /t /q appended to the existing options, to no avail. Why isn't /i working to suppress the File or Directory prompt? Is there an order I need to pass the options in? Is there something else I need to do? EDIT: I should mention that the file is a text file with a single line of text. It does not have an extension. It looks like this: FILE-NAME

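    A sketch of the classic workaround: /i only makes xcopy assume a directory when more than one file is being copied, so a single-file copy to a destination that does not exist yet still prompts. Pre-answering the prompt on stdin sidesteps it (F means "the destination is a file"; the letter is locale-dependent):

        echo F | xcopy "C:\Program Files\path\to\my\file" "\\othermachine\c$\Documents and Settings\<myUserName>\Desktop\Test\path\in\directory\structure\to\file" /y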

  • Windows Vista Wrong Certificate With SNI

    - by JamesArmes
    I'm setting up SNI on an Apache server, and I thought things were going well. I have two URLs from different domains that point at the same site. I have one virtual host set up for each, with the appropriate certificate for each. One of the certificates is valid, but the other is self-signed (waiting on GoDaddy for the real cert). If I test the different URLs in Firefox, Safari, and Opera, all works well: I get no errors for the URL with the valid certificate, and I get a self-signed warning for the other. However, in Internet Explorer 8 and Google Chrome, both URLs return the valid certificate (even though it's not valid for the specific site). So for the one site I get a valid certificate; for the other, I get a warning about the cert being for a different site. I tried switching the order of the vhosts and it made no difference. I know that Chrome and IE both use Windows' HTTP stack, so I understand why the behavior is the same for the two. What I don't understand is why I'm seeing this behavior.

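    A way to take the browsers out of the equation and see exactly what the server hands out with and without SNI (hostnames are placeholders):

        # No SNI: the server can only serve its default vhost certificate
        openssl s_client -connect example.com:443 < /dev/null | openssl x509 -noout -subject
        # With SNI: a correctly configured vhost should return its own certificate
        openssl s_client -connect example.com:443 -servername example.com < /dev/null | openssl x509 -noout -subject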

  • Advice on Computer Specs for overall development/general use machine

    - by Ender
    At the moment I am restricted to a laptop with 512 MB of RAM, a 120 GB HDD, and a 1.5 GHz Intel processor for all my development and general browsing needs, and as you can probably tell, using it for anything modern is a painful experience. As a result I've decided to buy myself a new desktop computer, one that will stand the test of time and one that can be upgraded easily. Rather than build the machine myself, I've decided to go through Dell, as I've had good experiences with them when purchasing computers for my family. I've had my eye on this, as it's got a good amount of RAM, has a decently rated processor, and isn't priced too badly: http://www1.euro.dell.com/uk/en/home/Desktops/inspiron-580/pd.aspx?refid=inspiron-580&s=dhs&cs=ukepp1&~oid=uk~en~20211~inspiron-580_d005827~~

        Intel® Core™ i5 Processor 750 (2.66GHz, 8MB)
        Genuine Windows® 7 Home Premium 64bit - English
        Display Not Included
        ATI Radeon™ HD 5450 1GB DDR3 graphics
        6144MB Dual Channel DDR3 [3x2048] Memory
        1TB (7200rpm) SATA Hard Drive
        DVD +/- RW Drive (read/write CD & DVD) with DVD Burn software
        1 year of coverage included with your PC
        McAfee® Security Centre - 15 Month Protection - English

    After the pain of using a slow laptop for all this time, the main thing I want is speed. I may look to play a couple of basic games on it, nothing too powerful. Obviously I'll be doing some development on it too, so it'll have to be able to handle the latest IDEs and database tools like SQL Server pretty quickly. Finally, should I ever need to improve it, I'd like to be able to add more RAM and change some of the parts. I wouldn't have thought this would be a problem, but a few people I've spoken to have said that the amount of RAM the motherboard can handle isn't that great. Is this true? How long can I expect to be using this computer before it's too slow? Thanks in advance for the help.


  • lighttpd: why using a port >= 9000 does not work properly

    - by yejinxin
    I have a lighttpd server which works normally; I can access the website from outside (non-localhost) via http://vm.aaa.com:8080. Let's just assume that it's a simple static website, without PHP or MySQL. Now I want to copy this website as a test one (using another port) on the same machine, and I do not want to use a virtual host. So I just copied the whole file tree of the original server, including lighttpd's bin/, conf/, htdocs/, and lib/ folders, and made the required changes, including changing lighttpd.conf. Now here is what confuses me: if I change the port to a number below 9000, it works perfectly. But if the port is changed to a number equal to or greater than 9000, lighttpd can start, but I cannot access the new website from OUTSIDE, while I CAN access it from INSIDE (I mean on the same LAN or on localhost). The access log from inside looks like this:

        vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"

    The command I use to start lighttpd is:

        bin/lighttpd -f conf/lighttpd.conf -m lib/ -D

    My lighttpd.conf is like:

        server.modules = (
            "mod_access",
            "mod_accesslog",
        )

        var.rundir = "/home/work/lighttpd_9876"

        server.port = 9876
        server.bind = "0.0.0.0"
        server.pid-file = var.rundir + "/log/lighttpd.pid"
        server.document-root = var.rundir + "/htdocs/"

        var.cronolog_path = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
        server.errorlog = ...
        accesslog.filename = ...
        ...

    So why is this happening? I've tried several different ports; still the same. Aren't all ports between 8000 and 65535 treated the same?

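    Since the daemon answers locally, a hedged first suspect is packet filtering between the outside and that port rather than lighttpd itself; site firewalls often whitelist only specific high ports. Quick checks on the server, run as root:

        iptables -L -n --line-numbers | grep 9876   # any host-level rule naming the port?
        netstat -tlnp | grep 9876                   # is the listener really on 0.0.0.0:9876?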

  • Syntax error at '{'; expected '}' when using nagios in puppet

    - by jiangchengwu
    It's a big problem for me, because I'm not familiar with Puppet. The error on the puppetmaster:

        debug: importing '/etc/puppet/manifests/nodes/group-1.pp'
        err: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    The error on the puppet client:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    In group-1.pp:

        node 'group1' {
            include ntp
            class { 'nagios::host':    # this is line 6
                nodename => $clientcert,
                appname  => 'test',
            }
        }

    The code for nagios::host, in module/nagios/host.pp, is here:

        class nagios::host($nodename, $hostgroup) {
            file { '/usr/lib/nagios/plugins':
                mode    => "755",
                require => Package["nagios-plugins"],
            }
            ...
            @@nagios_service { "${nodename}_check_ssh":
                ensure                 => present,
                use                    => 'generic-service',
                host_name              => "${nodename}",
                notification_interval  => 60,
                flap_detection_enabled => 0,
                service_description    => "SSH",
                check_command          => "check_ssh",
                target                 => "/etc/nagios3/services.d/${nodename}.cfg",
            }
        }

    And the file module/nagios/init.pp is blank. How can I fix this?

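    A hedged reading of the error, not confirmed by the post: the class { 'nagios::host': ... } parameterized-class declaration syntax was introduced in Puppet 2.6, and an older 0.25.x master fails to parse it with exactly this "Syntax error at '{'" message, so the master's version is worth checking first. Separately, nagios::host declares the parameters ($nodename, $hostgroup) while the node passes appname; once the syntax parses, that mismatch will be the next failure.

        # On the master; anything older than 2.6 cannot parse "class { ... }" blocks
        # (on 0.25.x the binary may be puppetmasterd --version instead)
        puppet --version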

  • SQL 2008 R2 replication error: The process could not connect to Distributor

    - by Lance Lefebure
    I have two servers running SQL 2008 R2 Standard, each with an instance named "MAIN". I have a small test database on my primary server (one table, 13 rows) that I want to replicate to the second server as a proof of concept for some larger databases that I want to replicate. I set up the primary server to be a publisher and distributor, and set the database to do transactional replication. I copied the data to the second server via a backup/restore, not via a snapshot (which I'll have to do with the larger databases due to database size and limited bandwidth). I followed the instructions here: http://gnawgnu.blogspot.com/2009/11/sql-2008-transactional-replication-and.html Now, on the subscriber, I go under Replication / Local Subscriptions / right click / Properties on my subscription to the DB. The status of the last synchronization shows: "The process could not connect to Distributor 'PRIMARYSERVER\MAIN'." Data IS replicating from the primary to the secondary; any record I add on the primary shows up on the secondary server within seconds. Is the Distributor part of the snapshot system that I'm not using, or is it part of the transactional replication stuff? Thanks, Lance

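    For context, a general fact about SQL Server replication rather than anything from this post: the Distributor is central to transactional replication too, since the log reader and distribution agents move every transaction through the distribution database, so the error is worth chasing even though data is flowing. A connectivity check from the subscriber:

        rem Can the subscriber actually reach the distributor instance?
        sqlcmd -S PRIMARYSERVER\MAIN -E -Q "SELECT @@SERVERNAME"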

  • How can I create a VLAN on my extreme switch for a separate subnet/domain?

    - by drpcken
    I'm putting together a small Active Directory implementation for a buddy of mine. I currently have two servers (one is the primary domain controller) and a couple of clients. I need to test and run updates on every machine on this domain, but I would have to plug them into my current LIVE domain to get internet access. From what I've read, having two separate domains on a single subnet is a bad idea (even though it is temporary), so I don't want to risk messing anything up on my production domain. I'm pretty sure I can create a separate VLAN on my Extreme 48-port switch and plug this smaller domain into it on a different subnet, but I don't know the commands. Both subnets would need internet access, of course (one of the things I can't wrap my head around is routing internet traffic between subnets; the gateway is on the production subnet). The switch is a Summit X450e-48p. My production domain is on subnet 192.168.200.0. The new domain I want to put online would go into subnet 192.168.10.0. A shove in the right direction would be greatly appreciated. Thank you!

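    A minimal sketch for ExtremeXOS, which the Summit X450e runs; the VLAN name, tag, address, and port range are all placeholders to adapt:

        create vlan "testlab"
        configure vlan testlab tag 10
        configure vlan testlab ipaddress 192.168.10.254 255.255.255.0
        configure vlan testlab add ports 41-48 untagged
        enable ipforwarding

    With ipforwarding enabled the switch routes between the two subnets; internet-bound traffic from 192.168.10.0/24 still needs a return path, so the production gateway must also get a static route for 192.168.10.0/24 pointing at the switch's 192.168.200.x address.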

  • IIS not responding with SSL Server Hello

    - by Damien_The_Unbeliever
    I'm having difficulty getting a particular IIS machine to "do" SSL. This is a test environment (one of many) which we've set up "the same" as we have many times previously, but it just doesn't seem to be working. The server is Windows Server 2003 (Version 5.2, Build 3790.srv03_sp2_gdr.100216-1301, Service Pack 2). IIS is hosting 4 sites (including the default site), but only one site is configured for SSL. We're using the same SSL certificate we use on all of our other servers (it's a wildcard cert). (Don't think this is relevant, but including it anyway: we've configured the site to require SSL; it has a subdirectory that doesn't require SSL but has an ASP page that redirects into SSL, and the 403;4 error page for the site is hooked up to this ASP page. This is how we do our non-HTTPS-into-HTTPS transition.) Using Microsoft Network Monitor (3.3), I've just run a session against a server where SSL is working. It can pull apart the SSL exchange as the following messages:

        SSL: Client Hello
        SSL: Server Hello. Certificate. Server Hello Done
        SSL: Client Key Exchange. Change Cipher Spec. Encrypted Handshake Message.
        SSL: Change Cipher Spec. Encrypted Handshake Message

    However, on our problem server, I only see:

        SSL: Client Hello.

    The next packet from the server (from port 443, so it's listening and responding there) contains only 60 bytes and just seems to have the TCP headers and not much else (but I'm by no means an expert at deciphering these things). So, where do I look next? What additional information do I need to add to this question, and where do I find it?

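    A hedged next step on Server 2003: a Client Hello answered by a near-empty TCP packet often means no certificate is bound to that IP:port at the HTTP.SYS level, so the server has nothing to build a Server Hello from. The binding can be inspected with httpcfg from the Support Tools (tool name and availability are version-specific):

        rem Shows which certificate hash, if any, is bound to each IP:443
        httpcfg query ssl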

  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site which I would like to configure with its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far:

    1) Configured a new website with its own AppPool.
    2) Selected PHP 5.3.6 from the PHP Manager available on the website home in IIS (not the web server home, which sets the global version of PHP).
    3) Added the following lines to the <fastCgi> section of the applicationHost.config file located at system32/inetsrv/config:

        <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe"
                     arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com"
                     maxInstances="4" idleTimeout="300" activityTimeout="30"
                     requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe"
                     queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10">
            <environmentVariables>
                <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" />
            </environmentVariables>
        </application>

    4) I then create a php.ini file located in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website), containing:

        register_globals = on

    5) I then run test.php, which simply outputs everything the call to phpinfo() returns. At this point, I observe that the global setting is register_globals = off (as it should be), but the local setting is also register_globals = off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output:

        Configuration File (php.ini) Path       C:\Windows
        Loaded Configuration File               C:\Program Files (x86)\PHP\v5.3\php.ini
        Scan this dir for additional .ini files (none)
        Additional .ini files parsed            (none)

    What am I messing up, or is there a different way to go about this?

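    The "Loaded Configuration File" line shows the site is still served by the default PHP process pool; a hedged interpretation is that the site's handler mapping does not reference the new fullPath|arguments pair, and IIS matches fastCgi application entries by that exact combination. A way to inspect it:

        rem List the handlers the site actually uses, then compare scriptProcessor
        rem against the fullPath|arguments pair added to applicationHost.config
        %windir%\system32\inetsrv\appcmd list config "kickasswebsite.com" -section:system.webServer/handlers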

  • Ubuntu 10.10 - PC shuts down before boot shortly after BIOS loads

    - by clem
    Since installing Ubuntu 10.10 (upgraded from Karmic) I've started having problems starting up the PC. I've done a complete wipe (Boot and Nuke) of the hard drive and reinstalled Ubuntu 10.10, but the problem still occurs. There is no dual boot on the PC, just Ubuntu. Here is the problem: each morning, when I turn the PC on from being off overnight, the PC starts up and loads the BIOS, and I get the following message:

        Verifying DMI Pool Data...
        K8 NPT Data Change...Update New Data to DMI!.......

    Then, poof, the computer shuts off. However, after switching the computer back on around 6 or 7 times after it's turned itself off, it will eventually boot up without any problem. Also, once it's been up and running for a while, I can shut down and restart the PC first time, without any issues. I have also noticed a problem with the USB mouse being recognised: once I finally get the computer booted up, I need to unplug the mouse and plug it back in to get it working. I've opened the PC up and checked the connections (cables, cards, and memory) and it all seems fine. The main issue with troubleshooting this problem is that I cannot test any suggestions or fixes until the next morning, because once the computer is up and running it will remain so! I do not leave the computer on overnight, to save energy. So: is this a hardware issue or a boot-software issue? This is a very odd problem, and I have googled to no avail. Any suggestions?


  • Nginx + WordPress + HHVM: Why isn't Batcache working? Would Varnish help even more?

    - by javipas
    I've heard great things about HHVM, so I've set up a copy of a WordPress blog (on another domain) with Nginx (with the PageSpeed module) and HHVM. Right now the benefits are obvious: on the same config, load times are between two and three times faster. I'm trying to speed things up a little more, and I've also installed Memcached and Batcache. I've installed the memcached package, copied object-cache.php (Pastebin) into the root folder of the WordPress blog, and after that I've installed the Batcache plugin and copied the advanced-cache.php (Pastebin) file into the wp-content folder. Also, I've included the line define('WP_CACHE', true); in the wp-config.php file. It seems it doesn't work, though. If I quickly reload the page several times, Batcache should show the cached page, but it doesn't. It's easy to check by reloading (Cmd+R on Chrome on OS X) the page several times and then viewing the page's source: under the <head> section I should see some Batcache stats, but they aren't there. I wonder if someone could give me a hint on this. On a side note, I don't know if I could add some other component to improve performance even further. I'm thinking about Varnish, but I'm not sure whether it would just be another way of doing what I'm already doing. Any other components worth adding? (I'll test a CDN for images, minifying JS, and some other tricks as well, but I'm asking from the server perspective.)

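    Two hedged checks. WordPress only loads the object-cache.php drop-in from the wp-content directory, not from the blog root, and Batcache's advanced-cache.php quietly does nothing without a working object cache, so the copy described above would leave Batcache inert. After moving the file, confirm memcached is answering (paths and the default port are assumptions):

        # Put the drop-in where WordPress looks for it
        mv /path/to/blog/object-cache.php /path/to/blog/wp-content/object-cache.php
        # Is memcached alive on its default port?
        echo stats | nc -q 1 127.0.0.1 11211 | head -n 3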
