Search Results

Search found 25400 results on 1016 pages for 'enable manual correct'.


  • Iptables config breaks Java + Elastic Search communication

    - by Agustin Lopez
    I am trying to set up a firewall for a server hosting a Java app and Elasticsearch (ES). Both run on the same server and communicate with each other. The problem I am having is that my firewall configuration prevents Java from connecting to ES, and I'm not sure why. I have tried a lot of things, like opening the port range 9200:9400 to the server IP, without any luck, but from what I know, all communication inside the server should be allowed with this configuration. The idea is that ES should not be accessible from outside, but it should be accessible from this Java app; ES uses the port range 9200:9400. This is my iptables script:

        echo -e Deleting rules for INPUT chain
        iptables -F INPUT
        echo -e Deleting rules for OUTPUT chain
        iptables -F OUTPUT
        echo -e Deleting rules for FORWARD chain
        iptables -F FORWARD
        echo -e Setting by default the drop policy on each chain
        iptables -P INPUT DROP
        iptables -P OUTPUT ACCEPT
        iptables -P FORWARD DROP
        echo -e Open all ports from/to localhost
        iptables -A INPUT -i lo -j ACCEPT
        echo -e Open SSH port 22 with brute force security
        iptables -A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set --name SSH --rsource
        iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 4 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
        iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --rcheck --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j LOG --log-prefix "SSH brute force "
        iptables -A INPUT -p tcp -m tcp --dport 22 -m recent --update --seconds 30 --hitcount 3 --rttl --name SSH --rsource -j REJECT --reject-with tcp-reset
        iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
        echo -e Open NGINX port 80
        iptables -A INPUT -p tcp --dport 80 -j ACCEPT
        echo -e Open NGINX SSL port 443
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT
        echo -e Enable DNS
        iptables -A INPUT -p tcp -m tcp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT
        iptables -A INPUT -p udp -m udp --sport 53 --dport 1024:65535 -m state --state ESTABLISHED -j ACCEPT

    And I get this in the Java app when this config is in place:

        org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];
            at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:292)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1185)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:537)
            at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:475)
            at org.springframework.beans.factory.support.AbstractBeanFactory$1.getObject(AbstractBeanFactory.java:304)
            at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
            at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:300)
            at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195)
            at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:700)
            at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482)
            at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:403)

    Do any of you see any problem with this configuration and ES? Thanks in advance.
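    A guess worth testing, offered only as an assumption: with the default DROP policy there is no general rule letting reply traffic back in, and ES releases of this era discover their master over zen multicast by default, so the "no master" error may simply mean discovery packets are being dropped. A minimal sketch of additional rules - 198.51.100.10 is a placeholder for the server's own address:

        # allow replies to connections the server itself initiated
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # allow the ES port range, but only from this host
        iptables -A INPUT -p tcp -s 198.51.100.10 --dport 9200:9400 -j ACCEPT
        # zen multicast discovery group/port used by pre-2.0 Elasticsearch
        iptables -A INPUT -p udp -d 224.2.2.4 --dport 54328 -j ACCEPT

    Alternatively, binding ES to 127.0.0.1 (network.host in elasticsearch.yml) and using unicast discovery keeps everything on the loopback interface, which this ruleset already accepts.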

    Read the article

  • Htpc aka "Media Center": cheap and *silent*?

    - by Unknown
    It may be me, or the place I live (Italy), but it seems pretty hard to find a build, a prebuilt nettop, or a laptop that fits the need. I need something that is:

    - silent, and able to play back all H.264 full-HD content without stuttering (and without losing the hardware acceleration because of softsubs...)
    - silent
    - not ugly
    - silent and (possibly) cheap.

    I'm going the Linux route, so I'm leaning towards a CPU-based or NVIDIA-integrated solution (I don't think ATI hardware-accelerated playback - or the Intel "HD" acceleration - is usable yet). My options so far:

    Ion nettop: it's either the Acer Revo (but here it's incredibly pricey and it's hard to find the dual-core version) or the ASRock Ion 330, which in the current version is rated "silent" at 26 dB. 26! That sounds pretty noisy to me, and the previous version was even worse - was this product really aimed at the HTPC market? The Dell Zino is, I think, ATI-based, unfortunately.

    Laptop: correct me if I'm wrong, but sub-600 €/$ units are quite loud under full load (because of the tiny fans). ULV laptops are quite similar: the tiniest fans mean high-pitched noise, and the CPU still lacks the power for non-accelerated video decoding.

    Handmade build: little money can be saved with underpowered CPUs, since a low-to-midrange CPU would help with non-hardware-accelerated content; the cases are quite pricey; a PSU that keeps the noise down ranges between 100/150 €/$ minimum; and a low-mid build, all included, sums up to over 650 €/$ for a still-ugly unit, without the Blu-ray drive.

    Please help. What do you advise? ;) Am I ignoring laptops too much, maybe? Are low-priced Acers that noisy/high-pitched under full load?

    Read the article

  • Run script before shutdown/restart

    - by dtbarne
    I'd like to run a PHP script when an instance is told to shut down, but of course before it actually finishes shutting down. My particular script just pushes some log files from the local partition to another server. I've got the gist of how this process works, but I need some clarification. How I understand it (please correct me if I'm wrong):

    1. Create an executable script in /etc/init.d (let's call it /etc/init.d/push-logs).
    2. Create a symlink to /etc/init.d/push-logs from /etc/rc0.d (shutdown) and /etc/rc6.d (reboot). The name should be KXXpush-logs.

    Here are my questions:

    1. Of course - am I understanding correctly?
    2. For #2 above, it sounds like the lower the XX the better - is there a number too low to use? Does it matter if it shares a number with another script?
    3. Does the script in /etc/init.d/push-logs HAVE to follow the standard init.d template (supporting start/stop, etc. commands)? This doesn't really apply to my use case. If possible I just want the script to be the following (see the sketch after this list):

        #!/bin/sh
        #
        # Run PHP file prior to shutdown
        #
        /usr/bin/php /path/to/php_file.php
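    For what it's worth, a minimal sketch of how this usually looks on a Debian-style sysvinit system. The K05 priority and the LSB header fields are assumptions to verify against your distribution; the one behavior worth knowing is that scripts linked from rc0.d/rc6.d are invoked with the "stop" argument:

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          push-logs
        # Required-Start:
        # Required-Stop:     $network
        # Default-Start:
        # Default-Stop:      0 6
        # Short-Description: Push logs to a remote server before shutdown/reboot
        ### END INIT INFO
        # rc0.d/rc6.d call this with "stop", so do the work there
        case "$1" in
          stop)
            /usr/bin/php /path/to/php_file.php
            ;;
          *)
            # start/restart are no-ops for this job
            ;;
        esac
        exit 0

    Install it with a low K number so it runs before networking is taken down:

        sudo chmod +x /etc/init.d/push-logs
        sudo ln -s ../init.d/push-logs /etc/rc0.d/K05push-logs
        sudo ln -s ../init.d/push-logs /etc/rc6.d/K05push-logs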

    Read the article

  • Can't boot flash drive on GIGABYTE motherboard

    - by Deltik
    Situation:

    When I try to boot from my flash drive, my GIGABYTE 970A-UD3 motherboard returns this:

        Loading Operating System ...
        Boot error

    All other motherboards I've tried support booting from that flash drive (and a backup flash drive). The operating systems I tried on both flash drives were created with usb-creator-gtk (Ubuntu USB Startup Disk Creator). I know that the motherboard understands that there is an operating system on the flash drives, because when I erase them, it complains in an ALL CAPS RAGE that there isn't an operating system, which is correct. How can I boot a flash drive that's bootable from other motherboards on this motherboard?

    Qualification:

    This question is not a duplicate of this one, because directly writing the ISO 9660 image to the flash drive (dd if=operating_system.iso of=/dev/sdb) still does not get the motherboard to recognize the operating system. This question should be a duplicate of this one, because I provide more information not provided by that poster. This forum thread has broken links and does not have a solution to my problem. Nobody knows what's going on in this forum thread.

    Read the article

  • Integration of SharePoint 2010 with TFS2010

    - by Kabir Rao
    We have performed the following steps so far:

    1. Installed TFS 2010 10.0.30319.1 (RTM) on Windows Server 2008 R2 Enterprise (app tier).
    2. Installed SQL 2008 SP1 with Cumulative Update 2 on Windows Server 2008 R2 Enterprise (data tier). Reporting Services is installed on the app tier.
    3. After this installation worked fine, we installed SharePoint 2010 on the app tier.
    4. After installation we followed http://blogs.msdn.com/b/team_foundation/archive/2010/03/06/configuring-sharepoint-server-2010-beta-for-dashboard-compatibility-with-tfs-2010-beta2-rc.aspx for configuration.

    We are not able to perform the last step described in the link, as the following error occurred:

        TF249063: The following Web service is not available: http://apptier:31254/_vti_bin/TeamFoundationIntegrationService.asmx.
        This Web service is used for the Team Foundation Server Extensions for SharePoint Products.
        The underlying error is: The remote server returned an error: (404) Not Found.
        Verify that the following URL points to a valid SharePoint Web application and that the application
        is available: http://apptier:31254. If the URL is correct and the Web application is operating
        normally, verify that a firewall is not blocking access to the Web application.

    We have also noticed that the Document folder in the team project has a red X. Please help. Thanks upfront.

    Read the article

  • Cannot run a VM with more than three network interfaces with KVM

    - by Bostonvaulter
    I'm running KVM on top of Ubuntu 10.10 Server. I can create VMs (virtual machines) and network interfaces fine, but I cannot seem to add more than three network interfaces. As soon as I have a VM with four network interfaces, it gets stuck on startup at the starting SeaBIOS page with this message:

        Starting SeaBIOS (version pre-0.6.1-20100702_143500-palmer)

    So far I've verified this with two VMs, an Ubuntu 10.10 desktop and a Vyatta router. The specific network hardware I assign to the VMs doesn't seem to matter. I'm trying to have one bridged interface and three private networks, using Vyatta to route between them. Does anyone know why I can't run a VM with more than three network interfaces?

    Edit: Additionally, the KVM thread responsible for the specific VM hangs using ~100% CPU (i.e. one core). Here's the command for the process that is hanging:

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 \
          -name vyatta -uuid 6dff7c94-6810-423e-5fea-fec10da0e9b7 -nodefaults \
          -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/vyatta.monitor,server,nowait \
          -mon chardev=monitor,mode=readline -rtc base=utc -boot c \
          -drive file=/home/rams/virtual-machines/vyatta.img,if=none,id=drive-ide0-0-0,boot=on,format=raw \
          -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
          -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw \
          -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
          -device rtl8139,vlan=0,id=net0,mac=00:54:00:be:cc:4b,bus=pci.0,addr=0x3 \
          -net tap,fd=97,vlan=0,name=hostnet0 \
          -device rtl8139,vlan=1,id=net1,mac=52:54:00:da:59:ed,bus=pci.0,addr=0x5 \
          -net tap,fd=98,vlan=1,name=hostnet1 \
          -device rtl8139,vlan=2,id=net2,mac=52:54:00:ce:22:b6,bus=pci.0,addr=0x6 \
          -net tap,fd=99,vlan=2,name=hostnet2 \
          -device rtl8139,vlan=3,id=net3,mac=52:54:00:1e:bc:46,bus=pci.0,addr=0x7 \
          -net tap,fd=101,vlan=3,name=hostnet3 \
          -chardev pty,id=serial0 -device isa-serial,chardev=serial0 \
          -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus \
          -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4

    Edit: I've also found an error in dmesg that might be related (it also shows up when running virtd in verbose mode):

        14:47:24.399: warning : qemudParsePCIDeviceStrs:1422 : Unexpected exit status '1', qemu probably failed

    I've also tried disabling AppArmor, but that doesn't seem to make a difference.
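    One possibility, offered only as an assumption: each emulated rtl8139 loads a PXE option ROM at boot, and the legacy option-ROM address space is limited, so a fourth ROM can leave SeaBIOS stuck exactly like this. If that's the cause, switching the NIC model to virtio - or disabling the ROM, if your libvirt version supports the rom element - may help. A sketch of a libvirt interface definition, edited via virsh edit vyatta; the MAC and network name are placeholders:

        <interface type='network'>
          <mac address='52:54:00:aa:bb:cc'/>  <!-- placeholder MAC -->
          <source network='default'/>         <!-- placeholder network -->
          <model type='virtio'/>
          <rom bar='off'/>  <!-- skip the PXE option ROM; requires a libvirt that knows <rom> -->
        </interface>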

    Read the article

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g.:

        /chroot/website1
        /chroot/website2

    I'm using makejail, a high-level tool, for creating the jails and copying in the libraries and dependencies. Easy peasy. Each website will need nginx, PHP, and MySQL. For PHP I'm using php5-fpm, which actually supports chroot through its configuration; however, I'm not using that (maybe I should?). My question is which of the following three approaches is better:

    1. Every website has its own separate instance of nginx, PHP, and MySQL. The downside is that each web server + PHP has to listen on a different port. I also need a "master" nginx web server in front of them, reverse proxying to the chrooted servers behind it. Probably the most secure, but also the most advanced.
    2. I don't make any chroot jails manually. I set up one nginx web server that proxies PHP requests to php-fpm on different ports. I can have multiple php-fpm configurations, each with its own chrooted folder. This is quite manageable - however, only PHP will be chrooted, not the actual web server. Is this secure enough? Also, I tried this option out, and it seems I will need to use TCP instead of sockets for connecting to MySQL.
    3. You tell me ;)

    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find; however, I find the market for chroot guides very scarce. Any help or input is much appreciated!
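    For reference, a minimal sketch of what option 2 can look like with php5-fpm's built-in chroot support. Pool name, port, user, and paths are all placeholders:

        ; /etc/php5/fpm/pool.d/website1.conf
        [website1]
        listen = 127.0.0.1:9001
        user = website1
        group = website1
        chroot = /chroot/website1
        chdir = /
        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 3

    On the nginx side, note that SCRIPT_FILENAME must be expressed relative to the chroot, since that is the only filesystem PHP can see:

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9001;
            include fastcgi_params;
            # /htdocs here means /chroot/website1/htdocs on the real filesystem
            fastcgi_param SCRIPT_FILENAME /htdocs$fastcgi_script_name;
        }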

    Read the article

  • Server taking too long to respond error

    - by DCJones
    This is my first post on Server Fault and my first venture into web server configuration.

    The hardware and software:

        CPU: GenuineIntel, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz
        OS: Linux 2.6.18-128.el5
        Memory: 2Gb

    Background: I am running a small database (MySQL), around 1000 records, with each record containing 44 fields. At the start of each day ("00:01") the tables are cleared and populated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox internet browser. All remote PCs are connected to the internet over at least a 4Gb broadband connection. Each remote PC loads a URL that displays a dynamic page of data, refreshed every 20 seconds. This is a continual process, 24 hours a day. The problem I am having is that on odd occasions throughout the day, the PC browsers error with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the httpd.conf file on the server. Any help or advice anyone can provide would be very helpful.

    Best regards,
    Dereck

    Server config file: httpd.conf

        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5
        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         256
            MaxClients          254
            MaxRequestsPerChild 4000
        </IfModule>
        <IfModule worker.c>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads     150
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>
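    One generic observation, an assumption rather than a diagnosis: with 2 GB of RAM, a prefork MaxClients of 254 can push the box into swap under load, and swapping shows up to clients as exactly this kind of intermittent timeout. A sketch of a more conservative prefork section - the numbers assume roughly 20 MB per Apache child, so measure your own with ps or top before adopting them:

        <IfModule prefork.c>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          80
            MaxClients           80
            MaxRequestsPerChild 2000
        </IfModule>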

    Read the article

  • Weird permission issue with POSIX ACLs, NFS v3 on Linux

    - by jon
    I have two Linux systems, both running Debian Squeeze. Versions of (I think) the stuff involved are:

        kernel: 2.6.32-5-xen-amd64
        ii  nfs-kernel-server  1:1.2.2-4squeeze2  support for NFS kernel server
        ii  libnfsidmap2       0.23-2             An nfs idmapping library
        ii  nfs-common         1:1.2.2-4squeeze2  NFS support files common to client and server
        ii  portmap            6.0.0-2            RPC port mapper

    (The client doesn't have nfs-kernel-server involved.) I have a directory with ACLs:

        # file: dirname
        # owner: jon
        # group: foogroup
        # flags: -s-
        user::rwx
        user:www-data:rwx
        group::r-x
        group:foogroup:rwx
        mask::rwx
        other::r-x
        default:...

    There are two users, neither of which owns the directory:

        uid=3001(jake) gid=3001(jake) groups=3001(jake),104(wheel),3999(foogroup)
        uid=3005(nic) gid=3005(nic) groups=3005(nic),3999(foogroup)

    The jake user can create files in the directory without issues. The nic user can't. All UIDs/GIDs are the same on the client and server. I've verified (by packet sniffing) that the right uids/gids get sent via AUTH_UNIX - uid=gid=3005, auxiliary gids=3005,3999 - and that the server replies with NFS3ERR_ACCESS, which the kernel on the client maps to EACCES (Permission denied). Can anyone help me here?
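    A sketch of the checks worth running next, assuming the export lives at /export/dirname on the server (a placeholder path). The point is to separate a local ACL problem from an NFS-layer one:

        # on the server, as nic: if this works locally but fails over NFS,
        # the ACL itself is fine and the NFS layer is rejecting the request
        su - nic -c 'touch /export/dirname/testfile'

        # confirm the effective rights; a narrowed mask silently downgrades
        # the group:foogroup:rwx entry
        getfacl /export/dirname

        # re-assert the group entry (setfacl also recomputes the mask)
        setfacl -m g:foogroup:rwx /export/dirname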

    Read the article

  • Nginx with PAM authentication through pam_script

    - by Envek
    Has anyone set up such a configuration? It doesn't work for me. So: I've installed nginx-extras on Ubuntu 12.04 (it's built with the PAM module) and added this to the site config:

        location ^~ /restricted_place/ {
            auth_pam "Please specify login and password from main_site";
            auth_pam_service_name "nginx";
        }

    Afterwards, in /etc/pam.d/nginx:

        auth required pam_script.so dir=/path/to/my/auth_scripts

    And I wrote the simplest possible /path/to/my/auth_scripts/pam_script_auth (I've also tried more complicated scripts):

        #!/bin/sh
        exit 0 # should allow anyone

    It doesn't work. The script is launched (I've written a fully functional script that successfully executes, checks credentials, writes to its own log, returns the correct exit code, and takes noticeably long to execute). But no access is granted - only rejected. In /var/log/nginx/error.log the following record appears:

        2012/09/13 10:44:42 [alert] 1666#0: waitpid() failed (10: No child processes)

    If I specify in /etc/pam.d/nginx:

        auth required pam_unix.so

    and grant the www-data user the right to read /etc/shadow, unix authorization works fine. But script auth doesn't work. I can't understand where the trouble is: in the nginx module, or in the pam_script module.

    Read the article

  • SSH hangs without password prompt

    - by Wilco
    Just reinstalled OS X, and for some reason I now cannot connect to a specific machine on my local network via SSH. I can SSH to other machines on the network without any problems, and other machines can SSH to the problematic one as well. I'm not sure where to start looking for problems - can anyone point me in the right direction? Here's a dump of a connection attempt:

        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /etc/ssh_config
        debug1: Connecting to 10.0.1.7 [10.0.1.7] port 22.
        debug1: Connection established.
        debug1: identity file /Users/nwilliams/.ssh/identity type -1
        debug1: identity file /Users/nwilliams/.ssh/id_rsa type -1
        debug1: identity file /Users/nwilliams/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_4.5
        debug1: match: OpenSSH_4.5 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host '10.0.1.7' is known and matches the RSA host key.
        debug1: Found key in /Users/nwilliams/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password,keyboard-interactive
        debug1: Next authentication method: gssapi-keyex
        debug1: No valid Key exchange context
        debug1: Next authentication method: gssapi-with-mic

    ... at this point it hangs for quite a while, and then resumes ...

        debug1: Unspecified GSS failure.  Minor code may provide more information
        Server not found in Kerberos database
        debug1: Unspecified GSS failure.  Minor code may provide more information
        Server not found in Kerberos database
        debug1: Unspecified GSS failure.  Minor code may provide more information
        debug1: Next authentication method: publickey
        debug1: Trying private key: /Users/nwilliams/.ssh/identity
        debug1: Trying private key: /Users/nwilliams/.ssh/id_rsa
        debug1: Trying private key: /Users/nwilliams/.ssh/id_dsa
        debug1: Next authentication method: keyboard-interactive
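    The long pause around gssapi-with-mic, followed by "Server not found in Kerberos database", usually means the client is waiting on Kerberos/DNS lookups. Assuming that is what's happening here, a minimal sketch for ~/.ssh/config that skips GSSAPI for this host:

        Host 10.0.1.7
            GSSAPIAuthentication no

    That addresses the mid-connection hang; if the session still stalls at keyboard-interactive afterwards, the remaining delay is likely on the server side (for example, the server doing its own reverse-DNS lookup of the client).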

    Read the article

  • Cannot push to GitHub from Amazon EC2 Linux instance

    - by Eli
    Having the worst luck pushing files to a repo from EC2 to GitHub. I have my SSH key set up and added to GitHub. Here are the results of ssh -v git@github.com:

        OpenSSH_5.3p1, OpenSSL 1.0.0g-fips 18 Jan 2012
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to github.com [207.97.227.239] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /root/.ssh/identity type -1
        debug1: identity file /root/.ssh/id_rsa type 1
        debug1: identity file /root/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5github2
        debug1: match: OpenSSH_5.1p1 Debian-5github2 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.3
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'github.com' is known and matches the RSA host key.
        debug1: Found key in /root/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /root/.ssh/identity
        debug1: Offering public key: /root/.ssh/id_rsa
        debug1: Remote: Forced command: gerve eliperelman 81:5f:8a:b2:42:6d:4e:8c:2d:ba:9a:8a:2b:9e:1a:90
        debug1: Remote: Port forwarding disabled.
        debug1: Remote: X11 forwarding disabled.
        debug1: Remote: Agent forwarding disabled.
        debug1: Remote: Pty allocation disabled.
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: Trying private key: /root/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).
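    Reading the trace, GitHub does recognize the offered /root/.ssh/id_rsa key (the "Forced command: gerve eliperelman" and "Server accepts key" lines), yet the client never completes the signature step and falls through to id_dsa. A common cause - an assumption here, not a certainty - is a private-key half that is unreadable or doesn't match the public key on file. A sketch of how to pin the test down to that one key:

        # force the exact key and nothing else
        ssh -i /root/.ssh/id_rsa -o IdentitiesOnly=yes -T git@github.com

        # or persist it in /root/.ssh/config
        Host github.com
            User git
            IdentityFile /root/.ssh/id_rsa
            IdentitiesOnly yes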

    Read the article

  • Forward differing hostnames to different internal IPs through NAT router

    - by abrereton
    Hi, I have one public IP address, one router, and multiple servers behind the router. I would like to forward differing domains (all using HTTP) through the router to different servers. For example:

        example1.com     => 192.168.0.110
        example2.com     => 192.168.0.120
        foo.example2.com => 192.168.0.130
        bar.example2.com => 192.168.0.140

    I understand that this could be accomplished using port forwarding, but I need all hosts running on port 80. I found some information about IP masquerading, but I found it difficult to understand, and I am not sure if it is what I am after. Another solution I have found is to direct all traffic to a reverse proxy server, which forwards the requests on to the appropriate server. What about iptables? I am using a Billion 7404 VNPX router. Is there a feature that this router has that can accomplish this? Are these my only options? Have I missed something completely? Is one recommended over the others? I have searched around, but I don't think I am hitting the correct keywords. Thanks in advance.
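    A note on why the router alone can't do this: NAT and port forwarding operate on IP addresses and ports, while the domain only appears in the HTTP Host header, so something has to speak HTTP to route on it - which is exactly the reverse-proxy option. A minimal sketch with nginx on whichever machine receives port 80 (the server names and IPs are taken from the example above; the choice of proxy host is an assumption):

        server {
            listen 80;
            server_name example1.com;
            location / {
                proxy_pass http://192.168.0.110;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $remote_addr;
            }
        }
        # ...one server block per hostname, e.g.:
        server {
            listen 80;
            server_name foo.example2.com;
            location / {
                proxy_pass http://192.168.0.130;
                proxy_set_header Host $host;
            }
        }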

    Read the article

  • atl90.dll version 9.0.30729.4148 is missing in WinSxS folder

    - by mkva
    I have the following problem: when starting Visual Studio 2008, it says "Cannot find one or more components. Please reinstall the application." and stops. With the help of Sysinternals Process Monitor, I found out that Visual Studio could not load atl90.dll 9.0.30729.4148 from the WinSxS folder. I tried manually copying in the older atl90.dll 9.0.30729.1, with the result that Visual Studio works again. Now, I call this a dirty workaround, not a solution. Plus, I still don't know why atl90.dll disappeared in the first place. So my questions:

    - Does anyone know of a reason why this might have happened?
    - Does anyone know a real solution to the problem, e.g. a Microsoft download that includes atl90.dll in the correct version 9.0.30729.4148 and installs it into WinSxS?

    Some details:

    - WinXP SP3
    - missing DLL: C:\WINNT\WinSxS\x86_Microsoft.VC90.ATL_1fc8b3b9a1e18e3b_9.0.30729.4148_x-ww_353599c2\atl90.dll
    - workaround DLL: C:\WINNT\WinSxS\x86_Microsoft.VC90.ATL_1fc8b3b9a1e18e3b_9.0.30729.1_x-ww_d01483b2\atl90.dll
    - the manifests in WinSxS seem to be all right, but unfortunately all point to the missing version 9.0.30729.4148

    Thanks, Markus

    Read the article

  • Why do I have untrusted certificates for Google, Yahoo, Mozilla and others?

    - by jackweirdy
    In the HTTPS/SSL section of chrome://chrome/settings, I see the following (screenshot omitted from this excerpt). What does this mean, and is there something wrong? I have a basic understanding of SSL/TLS - I'm not claiming to be completely familiar, but I'm fairly confident I know my way around it - but I don't understand why I have certificates installed on my machine specifically for these sites. From my understanding, I should have the certificates for certificate authorities, and any site I visit that uses SSL/TLS should have a certificate signed by one of these trusted CAs for me to trust the site. My worry is that if someone has maliciously installed a certificate for these sites on my machine, they could perform a DNS spoofing attack (or a number of other attacks) to hijack the connection to my email account without me knowing, and, as they have the private counterpart to the certificate on my machine, decrypt the communication. NB: I'm also aware that CA certificates aren't just within Chromium; they are used system-wide as part of libssl and stored in /etc/ssl/certs. What I'd like to know is:

    1. Is this correct? (The big red boxes make me think no.)
    2. Is this malicious or benign?
    3. What can I do to resolve this problem (if indeed it is a problem)?

    Thanks :)

    Read the article

  • Black screen appears when booting new install of Ubuntu 11.10 on my desktop, cannot access Grub menu to fix

    - by izn
    I installed 11.10 on my desktop PC but get a black screen after the BIOS screen when I try to boot it. I was able to run 10.04.4 on my hard drive before installing 11.10, and I am also able to use 11.10 from my USB pendrive and CD-ROM. I've tried unplugging all USB devices before booting, and also upgrading from 11.10 to 11.10. Holding the Shift key from the BIOS screen doesn't let me access the GRUB menu to try:

    1. Highlight the first entry, press "e" to edit it.
    2. Navigate to the words "quiet splash", delete them and type "nomodeset" in their place (without quotes).
    3. Press Ctrl + X to continue boot.
    4. Once on the desktop, go to System > Administration > Additional Drivers and activate the recommended drivers.

    So, running 11.10 from my pendrive, I tried editing /etc/default/grub, commenting out the GRUB_HIDDEN_TIMEOUT setting by putting a '#' in front of it to display the GRUB menu, and setting GRUB_TIMEOUT to a value greater than or equal to 1, e.g. GRUB_TIMEOUT=10. However, when I run sudo update-grub, I get:

        /usr/sbin/grub-probe: error: cannot find a device for / (is /dev mounted?)

    I get the same error from update-grub after:

        sudo mount /dev/sda1 /mnt

    and after:

        sudo grub-install --root-directory=/mnt /dev/sda
        reboot
        sudo update-grub

    Other suggestions to fix the update-grub problem: open Synaptic, purge all the related GRUB packages, reinstall grub-pc, and finally run sudo update-grub. Or use Grub Customizer (http://ubuntuforums.org/showthread.php?t=1195275). What would be the best way to approach this? I'm concerned about purging "all the related grub installed packages", but if it's true that some files are corrupted, this would seem necessary. Also, was I executing the correct commands, i.e. with mount and grub-install, before running update-grub?
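    On the update-grub error itself: grub-probe needs the running system's /dev, /proc, and /sys visible inside the target root, which a bare mount doesn't provide. A sketch of the usual chroot repair from the live pendrive, assuming the installation really is on /dev/sda1:

        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt
        grub-install /dev/sda
        update-grub   # /dev is now mounted inside the chroot, so grub-probe can find /
        exit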

    Read the article

  • Logic behind SCCM 2012 required PXE deployments

    - by Omnomnomnom
    I'm in the process of setting up Windows 7 deployment through PXE boot with Microsoft SCCM 2012. The imaging itself works very well, but I have a question about the logic behind PXE deployments. My setup is the following:

    - My Windows 7 deployment task sequence is deployed to the unknown computers group (not required; press F12 to start installing).
    - The OSDComputerName variable is also set on the unknown computers group, so unknown computers that are being imaged will prompt for a PC name.
    - The computer then becomes known in SCCM and is added to the correct collection(s).

    But if I want to reïnstall Windows on a known computer, things are different: I can do a required deployment of the imaging task sequence to the collection of computers. Then Windows installs through PXE, without any human interaction, keeping the original computer name. But because the initial deployment was not required, the "required PXE deployment" flag is not set. So as soon as I add a new computer to a collection with a required PXE deployment, it will start to reïnstall Windows again. I can also deploy the imaging task sequence to the new unknown computers as required, so the flag gets set initially. But then it does not prompt for a computer name (and it generates a name like MININT-xxx). Which is also sort of what I want - because when I want to re-install a machine, I want it to install without interaction. How can I solve this?

    Read the article

  • localhost won't load after adding config data to httpd

    - by OldWest
    I am not very experienced with configuring httpd, and I am following a tutorial to view my site with a domain name under localhost. My localhost just blanks out, and my Apache services won't restart. I checked all of my paths and they are correct. I am editing the windows/system32/drivers/etc/hosts file and my Apache httpd file. This is what I am putting in my hosts file:

        127.0.0.1 www.cars_v1.0.com.localhost

    And in the footer of my httpd file I am putting this:

        <VirtualHost 127.0.0.1:80>
            ServerName www.cars_v1.0.com.localhost
            DocumentRoot "C:\wamp\www\symfony\cars_v1.0\web"
            DirectoryIndex index.php
            <Directory "C:\wamp\www\symfony\cars_v1.0\web">
                AllowOverride All
                Allow from All
            </Directory>
            Alias /sf C:\wamp\www\symfony\cars_v1.0\lib\vendor\symfony-1.4.8\data\web\sf
            <Directory "C:\wamp\www\symfony\cars_v1.0\lib\vendor\symfony-1.4.8\data\web\sf">
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>
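    One thing worth checking, offered as an assumption: underscores are not valid in DNS hostnames, and Apache can refuse to start over a ServerName it considers malformed - which would match both symptoms here (the failed restart and the blank localhost). A sketch with a dash-only name; "cars-v1.localhost" is a placeholder:

        # hosts file
        127.0.0.1 cars-v1.localhost

        # httpd.conf
        <VirtualHost 127.0.0.1:80>
            ServerName cars-v1.localhost
            DocumentRoot "C:\wamp\www\symfony\cars_v1.0\web"
        </VirtualHost>

    Running httpd -t (or WAMP's "Test Configuration") before restarting will report the exact syntax complaint, if there is one.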

    Read the article

  • Mail.app doesn't detect sender in Address Book

    - by CoreSandello
    Hi there. I don't understand how 'smart addresses' in Mail.app work. Recently I noticed that for some emails I don't see the person's full name in the 'From' column. I started to dig into this behavior and found out that I have a few contacts in my Address Book that are not recognized by Mail.app. Here is how it looks: I have a person in Address Book with a filled-in email entry and filled-in first/last name (localized). I have an incoming email from that person (from the email address specified in Address Book), but the first/last name in the email itself doesn't match the ones specified in Address Book (e.g. the 'From' field in the email looks like 'John [work] <[email protected]>' while the Address Book entry is 'John Smith', localized, in Russian). And Mail.app doesn't recognize that this mail originates from that person in Address Book: if I click on the 'From' field, it suggests adding the sender to Address Book, while for other emails - especially ones with the full localized name in the 'From' field - I get a 'Show in Address Book' menu entry instead. I'm wondering: is this behavior correct, or am I missing something? I'm using Snow Leopard and Mail 4.0; my system language is set to English, if that matters. I'd like some clarification on this Mail.app behavior: whether it's fixable or not (and if it is fixable, I'd like to see a fix). By the way, is it possible to match the sender's address against an Address Book entry in filter rules? That would be great - then I could create rules like 'move all mail from that person to that folder' without specifying the exact source address. Thanks, Ivan.

    Read the article

  • Retail Windows XP Professional SP2: lost the key but have the genuine CD, what to do?

    - by AdityaGameProgrammer
    I recently formatted my system, only to find out I have lost the CD key to my original CD (I had used the option to enter the product key later). Yes, I know it was a stupid thing to do, but I bought the CD in 2008 from a retail store and I lost the original packaging. The actual label on the CD includes "service pack", "version 2002", and "© 2004 Microsoft Corporation, all rights reserved". There are some numbers on the back side of the CD in the inner ring, but I can't for the life of me figure out what use the genuine CD is to me when I can't seem to activate it. What exactly is the advantage of having the original CD in your possession in situations like this? I have tried the unattend.txt, and it doesn't contain the correct key; and there is no winnt.sif file on the CD. Where on the CD, or in it, can I find the product ID information? I live in India, and my attempts at the Microsoft support site keep getting me redirected to a page saying they stopped support for Windows XP in 2011. Let's say by some miracle I do contact Microsoft: what information would I have to provide them? And would they give me the product key for my CD from their database, or a new key?

    Read the article

  • mod_wsgi, .htaccess and rewriterule

    - by hadaraz
    I'm using several Django projects running on the same Apache instance through mod_wsgi, configured with a virtual host for each site (see the httpd.conf here). For one of the sites I want to use a static cache (StaticGenerator), so I set up a directory with an .htaccess file which contains:

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}/index.html !-f
        RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P]

    where 3456 is the Django port on the server. Using this rewrite rule, the request is always forwarded to the mod_wsgi handler, even if the file or directory exists, and if the file index.html exists the request shows up as request-path/index.html. I tried another setup:

        RequestHeader unset X-Forwarded-Host
        RewriteEngine on
        RewriteBase /
        RewriteCond $1 !-d
        RewriteCond $1index.html !-f
        RewriteRule ^(.*) http://127.0.0.1:3456/$1 [P]

    but got almost the same results: all requests are transferred to the mod_wsgi handler, but the request path is now the original one. To sum it up:

    1. What is the correct RewriteCond to use here?
    2. How do you transfer a request to the mod_wsgi handler? Is this the right way?
    3. If that's not the way to do it, then how do you serve static files from a directory when they exist, and serve from Apache/mod_wsgi when they don't?

    Thanks for your help.
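    For comparison, the recipe StaticGenerator-style setups usually document looks roughly like the sketch below. Note that the conditions test a concrete path built from DOCUMENT_ROOT rather than $1, which in per-directory context is only the pattern capture, not a filesystem path. The "cache" directory name is an assumption; substitute wherever the generator writes its files:

        RewriteEngine on
        # serve the pre-generated page if one exists for this URL
        RewriteCond %{DOCUMENT_ROOT}/cache%{REQUEST_URI}/index.html -f
        RewriteRule ^(.*)$ /cache/$1/index.html [L]
        # otherwise hand the request to the Django backend
        RewriteRule ^(.*)$ http://127.0.0.1:3456/$1 [P]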

    Read the article

  • mod_proxy incorrect redirect behaviour

    - by Kevin Loney
    In Chrome this configuration causes an infinite redirect loop, and in every other browser I have tried, a request for https://www.example.com/servlet/foo results in a redirect to https://www.example.com/foo/ instead of https://www.example.com/servlet/foo/. However, this only occurs when I do not include a trailing / at the end of the request URL (i.e. http://www.flightboard.net/servlet/foo/ works just fine).

        <VirtualHost *:80>
            # ...
            RewriteEngine On
            RewriteCond %{HTTPS} off
            RewriteCond %{REQUEST_URI} ^/servlet(/.*)?$
            RewriteRule ^(.*)$ http://%{HTTP_HOST}$1 [R=301,L]
        </VirtualHost>

        <VirtualHost *:443>
            # ...
            ProxyPass /servlet/ ajp://localhost:8009/
            ProxyPassReverse /servlet/ ajp://localhost:8009/
        </VirtualHost>

    The virtual host on port 443 has no rewrite rules that could possibly be causing the problem, the Tomcat contexts being referenced do not send any redirects, and if I change the ProxyPass and ProxyPassReverse directives to:

        ProxyPass / ajp://localhost:8009/
        ProxyPassReverse / ajp://localhost:8009/

    everything works fine (except that everything from www.example.com is passed to the proxy, which is not the behaviour I want). I'm fairly certain this is a problem with the way I have my proxy settings configured, because I logged all the rewrite output coming from Apache and it was all correct.

    Read the article

  • Deploying a Git server in a AWS linux instance

    - by Leroux
    I'm making a Git server on my Linux instance in AWS. I tried doing it using these instructions, but in the end I always get stuck with a "Permission denied (publickey)" message. So here are my detailed steps; the client is my Windows machine running msysGit and the server is the AWS Ubuntu instance:

    1. I created the user git with a simple password.
    2. Created the SSH directory in ~/.ssh.
    3. On the client I created SSH keys using ssh-keygen -t rsa -b 1024; they got dropped into my /Users/[Name]/.ssh directory, and the id_rsa and id_rsa.pub key pair was created.
    4. Using Notepad, I copy-pasted the text into newly created files on the server in the ~/.ssh directory of my git user: ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub were copied.
    5. On the server I made the authorized_hosts file using "cat id_rsa.pub authorized_hosts" (while inside the .ssh directory).
    6. Now to test it, on my client machine I did ssh -v git@[ip.address].
    7. Result:

        debug1: Host 'ip.address' is known and matches the RSA host key.
        debug1: Found key in /c/Users/[Name]/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /c/Users/[Name]/.ssh/identity
        debug1: Trying private key: /c/Users/[Name]/.ssh/id_rsa
        debug1: Offering public key: /c/Users/[Name]/.ssh/id_dsa
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I would appreciate any insight anyone can give me.
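    Two details in the steps above commonly cause exactly this failure, so here is a sketch of the conventional server-side setup for comparison (assuming the public key has been copied into the git user's home):

        # the file sshd reads is authorized_keys, not authorized_hosts,
        # and the cat needs a redirect to actually write the file
        cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

        # sshd silently ignores the file if permissions are too open
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys

    Note also that only the public key belongs on the server; the private id_rsa should stay on the client.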

    Read the article

  • Installing SQLServer 2005 on Windows 7 64bit

    - by Mostafa
    Hi, for three days I've been trying to install SQL Server 2005 under Windows 7 64-bit on my computer. First let me tell you what I've done and what I've got so far:

    1. I installed Windows 7 64-bit on my computer.
    2. I tried to install SQL Server 2005 "Developer Edition".
       - On the "System Configuration Check" page I received two warnings, one for "IIS Feature Requirement" and another for "ASP.NET Version Registration Required".
       - I installed "Internet Information Services" from the "Turn Windows features on or off" section in Control Panel.
       - I enabled the 32-bit reporting service via Inetpub > AdminScripts > adsutil.vbs.
       - At this stage there were no warnings in the System Configuration Check.
    3. So I installed SQL Server 2005 Developer Edition with all default settings.
    4. I installed SQL Server 2005 Service Pack 3, 64-bit.

    Now when I run Management Studio, there is no name in the "Server name" section. I typed my computer name, or ".", and got this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server.
        The server was not found or was not accessible. Verify that the instance name is correct and that
        SQL Server is configured to allow remote connections.
        (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)
        (Microsoft SQL Server, Error: 2)

    I googled this error, and some people said to follow this path: Start > SQL Server 2005 > Configuration Tools > SQL Server Surface Area Configuration > Surface Area Configuration for Services and Connections. But I got this error:

        No SQL Server 2005 components were found on the specified computer. Either no components are
        installed, or you are not an administrator on this computer. (SQLSAC)

    I'm really tired because of this, and I don't know what's wrong. Some more information:

    - I have no additional software on my computer, like antivirus or proxy.
    - I tried all the steps with the "Standard Edition" as well, with no difference.
    - My user is an administrator.
    - I have tried all those steps more than 5 times, including re-installing Windows 7.

    Please help me, I'm losing all my hair.

    Read the article

  • Error opening hyperlinks in Excel 2003

    - by richardtallent
    When clicking to follow hyperlinks from Excel, I'm now getting this error:

        Unable to open http://blah...
        Cannot download the information you requested.

    The hyperlinks in the Excel file are created using the HYPERLINK() formula. I use Google Chrome as my default browser. The web site in question uses Basic Authentication, and I've entered correct credentials when prompted (the dialog looked like an IE auth box, not Chrome's, but it's always been that way, even when it was working properly). This hasn't been an issue until recently. I'm guessing our IT department made some lame change to IE's configuration that is causing Office to not be able to open the URLs, despite Chrome being my browser. Things I've checked already:

    - The URLs are good; they work fine when pasted manually into Chrome, IE, or Firefox.
    - IE is not set to Work Offline (I already found that suggestion on Google).
    - I checked Program Access and Defaults and verified that Chrome is selected.
    - Nothing in the URL requires URL-encoding, so it's not a goofy encoding issue.

    I've had reports from some other users now and then about the same problem, but this is the first time I've experienced it myself.

    Read the article
