Search Results

Search found 1554 results on 63 pages for 'ca bearsfan'.


  • Why do I have untrusted certificates for Google, Yahoo, Mozilla and others?

    - by jackweirdy
    In the HTTPS/SSL section of chrome://chrome/settings, I see the following: What does this mean, and is there something wrong? I have a basic understanding of SSL/TLS - I'm not claiming to be completely familiar, but I'm fairly confident I know my way around it - but I don't understand why I have certificates installed on my machine specifically for these sites. From my understanding, I should have the certificates for Certificate Authorities, and any site I visit and use SSL/TLS should have a certificate signed by one of these trusted CAs for me to trust the site. My worry is that if someone has maliciously installed a certificate for these sites on my machine, they could perform a DNS spoofing attack (or a number of other attacks) to hijack my connection to my email account without me knowing, and as they've got the private counterpart to the certificate on my machine, decrypt the communication. NB: I'm also aware that CA certificates aren't just within Chromium and are used system wide as part of libssl - they're stored in /etc/ssl/certs. What I'd like to know is: Is this correct? - The big red boxes make me think no Is this malicious or benign? What can I do to resolve this problem? (If indeed it is a problem) Thanks :)

    Read the article
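
    One way to check whether entries like these are man-in-the-middle certificates rather than the genuine ones is to compare what each site actually serves against what is stored locally. A minimal sketch using openssl s_client (the host name and exported file name are only examples):

      # Fetch the certificate Google actually presents and show its fingerprint and issuer
      echo | openssl s_client -connect www.google.com:443 -servername www.google.com 2>/dev/null \
        | openssl x509 -noout -fingerprint -issuer -subject

      # Compare against a certificate exported from the browser's certificate manager
      openssl x509 -in exported-google-cert.pem -noout -fingerprint -issuer -subject

    If the fingerprints match and the issuer is a well-known CA, the connection itself is not being intercepted; if they differ, something has been added locally.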

  • Installing SATA dvd burner on machine with no spare SATA ports/connectors

    - by Faheem Mitha
    Greetings. I have the following motherboard Tyan Thunder K8WE S2895A2NRF Motherboard - extended ATX - nForce Pro 2200/2050 - Socket 940 - UDMA133, Serial ATA-300 (RAID) - 2 x Gigabit Ethernet - FireWire - 6-1 channel audio This is part of a computer that was assembled in the winter of 2006/2007. The user manual says the following with regard to SATA Integrated SATAII Generation 1 Controllers (from NForce Professional 2200) Two integrated dual port SATA II controllers Four SATA connectors support up to four drives 3 Gb/s per direction per channel NvRAID v2.0 support Supports RAID 0, 1, 0+1 and JBOD. I just purchased a SATA DVD burner. Here is the page for the product http://www.amazon.ca/gp/product/B002QGDWLK/ The problem I am facing is that I already have 4 SATA drives installed. I don't want to remove any of them. However, I want the DVD burner above installed as well. The person I am consulting with here (Bombay, India) tells me that my four available SATA ports are filled, and that my only option is to install a SATA card into the one free PCI slot on the motherboard. However, he says that with this setup I will not be able to boot from the DVD drive. Are these statements correct, and what are my other options if any? Even it the statements in the last para are true, I suppose I could use one of the motherboard connectors/ports there are currently being used with the hard drives with the DVD drive, and use the "add-on" connector with one of the hard drives. Not all the 4 hard drives need to be bootable. BTW, despite having read through http://en.wikipedia.org/wiki/Serial_ATA#Cables.2C_connectors.2C_and_ports I am fuzzy on the differences between connectors, cables and ports. Thanks in advance.

    Read the article

  • Is there any way for ME to improve routing to an overseas server?

    - by Simon Hartcher
    I am trying to make a connection to a gaming server in Asia from Australia, but my ISP routes my connection through the US. Tracing route to worldoftanks-sea.com [116.51.25.54]over a maximum of 30 hops: 1 <1 ms <1 ms <1 ms 192.168.1.1 2 34 ms 42 ms 45 ms 10.20.21.123 3 40 ms 40 ms 43 ms 202.7.173.145 4 51 ms 42 ms 36 ms syd-sot-ken-crt1-ge-6-0-0.tpgi.com.au [202.7.171.121] 5 175 ms 200 ms 195 ms ge5-0-5d0.cir1.seattle7-wa.us.xo.net [216.156.100.37] 6 212 ms 228 ms 229 ms vb2002.rar3.sanjose-ca.us.xo.net [207.88.13.150] 7 205 ms 204 ms 206 ms 207.88.14.226.ptr.us.xo.net [207.88.14.226] 8 207 ms 215 ms 220 ms xe-0.equinix.snjsca04.us.bb.gin.ntt.net [206.223.116.12] 9 198 ms 201 ms 199 ms ae-7.r20.snjsca04.us.bb.gin.ntt.net [129.250.5.52] 10 396 ms 391 ms 395 ms as-6.r20.sngpsi02.sg.bb.gin.ntt.net [129.250.3.89] 11 383 ms 384 ms 383 ms ae-3.r02.sngpsi02.sg.bb.gin.ntt.net [129.250.4.178] 12 364 ms 381 ms 359 ms wotsg1-slave-54.worldoftanks.sg [116.51.25.54] Trace complete. Since I think it will be unlikely that my ISP will do anything, are there any ways to improve my routing to the server without them having to intervene? NB. The game runs predominately over UDP, so I believe most low ping services are out of the question, as they rely on TCP traffic.

    Read the article
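
    One approach that needs no cooperation from the ISP is to rent a small VPS somewhere with a better path to Singapore and relay only the game server's traffic through it. A rough sketch, assuming a VPN or tunnel interface (tun0) to that VPS is already up; the address comes from the trace above:

      # send only the game server through the tunnel, leave everything else on the default route
      GAME_SERVER=116.51.25.54
      sudo ip route add ${GAME_SERVER}/32 dev tun0

      # confirm the new path
      traceroute -n ${GAME_SERVER}

    Because the relay carries UDP inside the tunnel like any other IP traffic, it avoids the TCP-only limitation of most low-ping proxy services mentioned above.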

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and Qmail running. Our primary domain is hosted through Media Temple. When Plesk and Qmail are hosted on a single Dedicated Virtual server, it reads the primary server IP and domain and reports that when sending emails from the system. Our pages are written in PHP so we are using the mail() function. While our email goes out to everybody, several enterprise email domains reject our email because it shows a different originating IP (our primary server IP and domain) than the domain we list in the 'from' address. This is not modifiable. Every domain we own of course does have its own IP as well underneath our primary server IP. I have seen several places online that provide a patch, specifically - which allows Domain Binding: "DomainBindings -- For servers that host multiple domains or have multiple IP addresses assigned to them, it is sometimes useful (or important) to have qmail use a specific IP address for its outgoing mail. By default, qmail uses whatever address the OS chooses for all outbound connections. With this patch, you can specify which address to use. It uses a control file similar to smtproutes to specify the outbound IP address to use, based on the sender's domain (local copy) (pyropus.ca)" Qmail Link First off I do not have netqmail installed so I'll need to find another source, but also I am completely unfamiliar with applying patches to qmail. Will I lose email services if I patch? Is it a simple apply and use process? Will my existing email accounts and data be restored after the patch? I am very, very new to unix/linux so this does make me a bit nervous but I am the only person who can make the change and it is one our company "HAS" to have. Any ideas?

    Read the article
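
    For context, qmail patches are applied to the source tree and the binaries are then rebuilt and reinstalled; the queue and the mailboxes on disk are not touched, so existing accounts and mail survive, although a Plesk-packaged qmail may differ from a vanilla source build. A heavily hedged outline of the usual source-patch cycle (the URL, patch file name and init script path are assumptions, not tested Plesk instructions):

      # Outline only - on a Plesk server the running qmail is Plesk's own build,
      # so rehearse this on a scratch machine first
      wget http://www.qmail.org/netqmail-1.06.tar.gz
      tar xzf netqmail-1.06.tar.gz
      cd netqmail-1.06
      patch -p1 < ../domainbindings.patch   # the outgoing-IP patch referred to above
      /etc/init.d/qmail stop                # short outage while the binaries are replaced
      make setup check                      # compiles and installs over the existing binaries
      /etc/init.d/qmail start

    The queue and mailboxes are ordinary files that the rebuild does not rewrite, which is why patching does not erase existing accounts or mail.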

  • An XKB keyboard map that responds to the left and right shift key individually

    - by mbfisher
    First off, excuse my ignorance of X and XKB; I've been trying to hack together a solution in the hope of being able to achieve what I want without requiring a detailed grasp of it. I'm trying to create an XKB keyboard map on Ubuntu 12.04 that allows me to stipulate which of the two shift keys constitutes the Level2 modifier. Specifically, the 4 key should only produce a $ when the right shift is held, not the left. My reading so far: http://www.charvolant.org/~doug/xkb/html/node5.html http://people.uleth.ca/~daniel.odonnell/Blog/custom-keyboard-in-linuxx11 http://www.x.org/releases/X11R7.5/doc/input/XKB-Enhancing.html Lots of searching! I've attempted to define a custom type, and then refer to it explicitly in a symbols map: /usr/share/X11/xkb/types/mbfisher: default xkb_types "mbfisher" { type "RIGHT_SHIFT" { modifiers = None+Shift_R; map[None] = Level1; map[Shift_R] = Level2; }; } /usr/share/X11/xkb/symbols/mbfisher: default partial alphanumeric_keys xkb_symbols "basic" { name[Group1]= "mbfisher"; key <AE04> { type= "RIGHT_SHIFT", symbols[Group1]= [ 4, dollar ] }; }; I'm then selecting the map with the Ubuntu Keyboard Layout GUI. This obviously disables the alphanumeric keyboard apart from the 4 key, but the dollar sign can still be typed with either shift key. I'm conscious of writing a massive question with lots of useless information so I'll stop here; please ask for anything I've missed out. Any ideas?

    Read the article
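
    Independent of the type definition itself, it is usually safer to test a map like this with xkbcomp against copies in your home directory than by editing /usr/share/X11/xkb and reloading through the GUI, since a broken system layout can leave the keyboard unusable. A hedged sketch (the directory layout is just a convention):

      # keep the experiment out of /usr/share/X11/xkb
      mkdir -p ~/.xkb/types ~/.xkb/symbols
      cp /usr/share/X11/xkb/types/mbfisher ~/.xkb/types/mbfisher
      cp /usr/share/X11/xkb/symbols/mbfisher ~/.xkb/symbols/mbfisher

      # dump the active keymap, edit it to pull in the mbfisher types/symbols,
      # then recompile it for the current display with the private include path
      setxkbmap -print > ~/.xkb/current.xkb
      xkbcomp -I"$HOME/.xkb" ~/.xkb/current.xkb "$DISPLAY"

    If the result is wrong, rerunning setxkbmap with your normal layout restores the previous map without touching any system files.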

  • How can I get the root account to generate an acceptable ssh key?

    - by Jamie
    On an ubuntu machine I did the following: ~$ sudo su - [sudo] password for jamie: root@mydomain:~# ssh-keygen -t rsa Generating public/private rsa key pair. Enter file in which to save the key (/root/.ssh/id_rsa): Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /root/.ssh/id_rsa. Your public key has been saved in /root/.ssh/id_rsa.pub. The key fingerprint is: 12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0 root@mydomain.ca The key's randomart image is: +--[ RSA 2048]----+ | | | | | | | | | | | | | | | | | | +-----------------+ root@mydomain:~# cat /root/.ssh/id_rsa.pub | ssh -p 443 [email protected] 'cat > authorized_keys' [email protected]'s password: root@mydomain:~# ssh -p 443 [email protected] [email protected]'s password: It's asking me for a password. However, using a regular account, the following works: $ cd ; ssh-keygen -t rsa ; cat ~/.ssh/id_rsa.pub | ssh [email protected] 'cat >> ~/.ssh/authorized_keys' $ ssh [email protected] Last login: Thu Oct 24 14:48:41 2013 from 173.45.232.105 [[email protected] ~]$ Which leads me to believe it's not an issue of authorized_keys versus authorized_keys2 or permissions. Why does the 'root' account accessing the remote 'jamie' account not work? The remote machine is CentOS if that's relevant.

    Read the article
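
    One detail stands out in the transcript above: the root attempt pipes into 'cat > authorized_keys', which creates (and truncates) a file called authorized_keys in the remote home directory, while the working example appends to ~/.ssh/authorized_keys. A sketch of the equivalent root-side commands, assuming the remote sshd uses the standard ~/.ssh/authorized_keys path:

      # As root on the Ubuntu machine: append the key to the right file on the remote side
      cat /root/.ssh/id_rsa.pub | ssh -p 443 [email protected] \
        'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'

      # This should now log in without a password prompt
      ssh -p 443 [email protected]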

  • How to set up IP forwarding on Nexenta (Solaris)?

    - by Gleb
    I am trying to set up IP forwarding on my Nexenta box: root@hdd:~# uname -a SunOS hdd 5.11 NexentaOS_134f i86pc i386 i86pc Solaris The box has 2 network interfaces: root@hdd:~# ifconfig -a lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 e1000g1: flags=1001100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4,FIXEDMTU> mtu 1500 index 2 inet 192.168.12.2 netmask ffffff00 broadcast 192.168.12.255 ether 68:5:ca:9:51:b8 myri10ge0: flags=1100843<UP,BROADCAST,RUNNING,MULTICAST,ROUTER,IPv4> mtu 9000 index 3 inet 10.10.10.10 netmask ffffff00 broadcast 10.10.10.255 ether 0:60:dd:47:87:2 lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1 inet6 ::1/128 192.168.12.0 is my normal LAN with 192.168.12.1 being the firewall/gateway 10.10.10.0 is a separate LAN for iSCSI (with no internet access) I want to set up IP forwarding so that a computer on 10.10.10.0 will be able to access the internet by using 10.10.10.10 as a gateway (I don't need any port forwarding) I have turned on IP forwarding: root@hdd:~# routeadm Configuration Current Current Option Configuration System State --------------------------------------------------------------- IPv4 routing disabled disabled IPv6 routing disabled disabled IPv4 forwarding enabled enabled IPv6 forwarding disabled disabled Routing services "route:default ripng:default" Routing daemons: STATE FMRI disabled svc:/network/routing/rdisc:default disabled svc:/network/routing/route:default disabled svc:/network/routing/legacy-routing:ipv4 disabled svc:/network/routing/legacy-routing:ipv6 disabled svc:/network/routing/ripng:default online svc:/network/routing/ndp:default But when I dry to start ipnat, I get an error: root@hdd:~# ipnat -CF -f /etc/ipf/ipnat.conf ioctl(SIOCGNATS): I/O error Here is the config: root@hdd:~# cat /etc/ipf/ipnat.conf #!/sbin/ipnat -f - # map e1000g1 10.10.10.10/24 -> 192.168.12.2/32 So the question is how to fix this.. Thanks in advance!

    Read the article
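
    On Solaris-derived systems an ioctl error from ipnat usually just means the IP Filter kernel module is not loaded because the ipfilter service is still disabled; ipnat talks to that module through /dev/ipnat. A hedged sketch of the usual sequence (service name as on OpenSolaris/Nexenta), with the map rule written against a network address rather than a host address:

      # enable IP Filter so the ipf/ipnat kernel devices exist
      svcadm enable network/ipfilter
      svcs network/ipfilter            # should report "online"

      # NAT the iSCSI LAN out through the LAN-facing interface
      cat /etc/ipf/ipnat.conf
      # map e1000g1 10.10.10.0/24 -> 192.168.12.2/32

      ipnat -CF -f /etc/ipf/ipnat.conf
      ipnat -l                         # list the active NAT rules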

  • How much network latency is "typical" for east - west coast USA?

    - by Jeff Atwood
    At the moment we're trying to decide whether to move our datacenter from the west coast to the east coast. However, I am seeing some disturbing latency numbers from my west coast location to the east coast. Here's a sample result, retrieving a small .png logo file in Google Chrome and using the dev tools to see how long the request takes: West coast to east coast: 215 ms latency, 46 ms transfer time, 261 ms total West coast to west coast: 114 ms latency, 41 ms transfer time, 155 ms total It makes sense that Corvallis, OR is geographically closer to my location in Berkeley, CA so I expect the connection to be a bit faster.. but I'm seeing an increase in latency of +100ms when I perform the same test to the NYC server. That seems .. excessive to me. Particularly since the time spent transferring the actual data only increased 10%, yet the latency increased 100%! That feels... wrong... to me. I found a few links here that were helpful (through Google no less!) ... Does routing distance affect performance significantly? How does geography affect network latency? Latency in Internet connections from Europe to USA ... but nothing authoritative. So, is this normal? It doesn't feel normal. What is the "typical" latency I should expect when moving network packets from the east coast <--> west coast of the USA?

    Read the article
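
    For calibration, physics puts a floor under any answer: the Bay Area and New York are roughly 4,100 km apart, and light in fibre propagates at about two thirds of c. A back-of-the-envelope check:

      # ~4,100 km great-circle distance, ~200,000 km/s propagation speed in fibre
      awk 'BEGIN { d = 4100; v = 200000; printf "one-way: %.1f ms, round trip: %.1f ms\n", d/v*1000, 2*d/v*1000 }'
      # prints roughly 20 ms one way, ~41 ms round trip

    Real paths are longer than the great circle and add router and queuing delay, so 70-90 ms coast-to-coast round trips are common; note also that the Chrome "latency" figure likely spans more than one round trip (TCP handshake plus the request itself), so it is not directly comparable to a single ping.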

  • Bridged network on OS X only gets UDP broadcast traffic

    - by a paid nerd
    I've created a bridged network Mac OS X 10.8.5 using ifconfig and TUNTAP for OS X to bridge my wireless connection, en0, with a virtual interface, tap0, which I can use for guest VMs: $ sudo sysctl -w net.inet.ip.forwarding=1 $ sudo sysctl -w net.link.ether.inet.proxyall=1 $ sudo sysctl -w net.inet.ip.fw.enable=1 $ sudo ifconfig bridge0 create $ sudo ifconfig bridge0 addm en0 addm tap0 $ sudo ifconfig bridge0 up $ ifconfig en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 28:cf:xx:xx:xx:xx inet6 xxxx::xxxx:xxxx:xxxx:xxxx%en0 prefixlen 64 scopeid 0x4 inet 192.168.100.64 netmask 0xffffff00 broadcast 192.168.100.1 media: autoselect status: active bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether ac:de:xx:xx:xx:xx Configuration: priority 0 hellotime 0 fwddelay 0 maxage 0 ipfilter disabled flags 0x2 member: en0 flags=3<LEARNING,DISCOVER> port 4 priority 0 path cost 0 member: tap0 flags=3<LEARNING,DISCOVER> port 8 priority 0 path cost 0 tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether ca:3d:xx:xx:xx:xx open (pid 88244) However, if I tcpdump -i tap0, I only see broadcast traffic. Shouldn't I see a mirror of everything on en0? (192.168.100.33, the host doing the broadcasting, is another unrelate, noisy server on my LAN.) (I asked a similar question here and will probably close it.)

    Read the article
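
    One way to narrow this down is to watch the Ethernet headers on both bridge members: if unicast frames addressed to the guest's MAC never appear on en0 in the first place, the access point is simply not forwarding them (typical for 802.11, which only relays frames for the associated station's own MAC), and no bridge setting on the Mac will change that. A quick check, with the guest MAC as a placeholder:

      # watch for any traffic to or from the guest's MAC on the wireless side
      sudo tcpdump -e -n -i en0 ether host ca:3d:00:00:00:01

      # and run the same filter on the tap side while the guest retries DHCP
      sudo tcpdump -e -n -i tap0 ether host ca:3d:00:00:00:01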

  • Setting up HTTPS across multiple servers

    - by JohnyD
    I'm looking to offer our online services over https and I'm having a couple of problems understanding how to accomplish this. To access our services you must pass through our ISA firewall to a Win2000 server running IIS6. About half our services are located here and the other half take you to a Win2003 server also running IIS6. So, in order to achieve this must each server have the proper certificate installed? ISA, IIS6_1 and IIS6_2? Is there a separate configuration that must be made to our ISA firewall? The other problem is with the CA and knowing how many certificates I need. It's important to note that the domain name for our services on IIS6_1 is www.domainname.com but the domain name on IIS6_2 is services.domainname.com. I believe that this will require me to purchase more than one certificate. It looks as though we will be going with Thawte's SSL123 as it's a good name and it's fast to get. Will I need to purchase 2 certificates (one for www that will be installed on our ISA firewall as well as IIS6_1, and one for services.domainname.com on IIS6_2)? Or will I need to purchase 3, the extra one being used on our firewall server? Another side question is about SAN's (subject alternative names). Is this basically adding sub-domains to your cert? So I could purchase one cert with 1 SAN for my www and services.? Thanks a lot for your help! Please let me know if I can provide any further information.

    Read the article
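
    On the SAN question: a subjectAltName list is part of the certificate signing request, so a single certificate can cover both www.domainname.com and services.domainname.com provided the CA product supports it (the entry-level SSL123 may not, so check before buying). A sketch of generating such a CSR with OpenSSL, where the config file is written for this one request and all names are examples:

      # --- request.cnf (create with any editor) ---
      [req]
      distinguished_name = dn
      req_extensions     = v3_req
      prompt             = no
      [dn]
      CN = www.domainname.com
      O  = Example Org
      C  = US
      [v3_req]
      subjectAltName = DNS:www.domainname.com, DNS:services.domainname.com

      # --- generate the key and CSR, then confirm the SAN made it in ---
      openssl req -new -newkey rsa:2048 -nodes -keyout domainname.key -out domainname.csr -config request.cnf
      openssl req -in domainname.csr -noout -text | grep -A1 "Subject Alternative Name"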

  • Hostname and SSL (apache) issue on Debian

    - by user105566
    I have been trying to setup SSL virtual host ServerAdmin [email protected] ServerName moclm.tap.pt DocumentRoot /var/www/tapme/ <Directory /> Options FollowSymLinks AllowOverride All Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> <Directory /var/www/tapme/> Options -Indexes FollowSymLinks MultiViews AllowOverride All #Order allow,deny #allow from all </Directory> SSLEngine on SSLCertificateFile /etc/ssl/moclm.cer SSLCertificateKeyFile /etc/ssl/moclm.pem </VirtualHost> For some reason, the server automatically redirect to SSL (http:// to https://). The apache is not configured to redirect and application was working fine on port 80 only. I have no knowledge how the internal network works as i am working remotely. The SSL error logs show: [Tue Oct 02 22:40:32 2012] [error] Hostname linemnt01.tap.pt provided via SNI and hostname moclm.tap.pt provided via HTTP are different I thought may be the hostname has some issue and have changed the hostname of the server from "linemnt01.tap.pt" to "moclm.tap.pt" but the issue is still there. I am getting the following error on browser: Bad Request Your browser sent a request that this server could not understand. i have /etc/hosts: 127.0.0.1 localhost.localdomain localhost moclm.tap.pt moclm and openssl returns: openssl verify -CAfile cert-CA.cer moclm.cer moclm.tap.pt.cer: OK I have been trying to troubleshoot the issue but no luck. Need help Thanks

    Read the article
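
    A quick way to see the mismatch from outside is to ask for each hostname explicitly with openssl's SNI support and to dump Apache's parsed vhosts; the error means the TLS handshake requested one name (linemnt01.tap.pt) while the HTTP Host: header carried another (moclm.tap.pt). A minimal check, hostnames as in the log above:

      # which certificate answers when SNI asks for moclm.tap.pt?
      openssl s_client -connect moclm.tap.pt:443 -servername moclm.tap.pt 2>/dev/null | openssl x509 -noout -subject

      # and when SNI asks for the old name?
      openssl s_client -connect moclm.tap.pt:443 -servername linemnt01.tap.pt 2>/dev/null | openssl x509 -noout -subject

      # how Apache has actually grouped the *:443 vhosts after the change
      apache2ctl -S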

  • Why can't we treat SSL certs like PGP keys instead of trusting CAs?

    - by yarun can
    I am dumb and stupid and I do not know all the technical aspects of SSL and server/client side implications and implementations. However I understand them good enough from user point of view to use SSL and encyrption daily. I was thinking that how silly it is to trust some unknown/known CAs when it comes to our our certificates for our servers. There had been many cases of misconduct, misuse, compromises and theft of certificates/ca keys from those places. On top of those known issues we also have to pay these guys regularly. I am wondering why can not we use/treat web server certificates like we use our pgp keys? So I sign a SSL certificate and send to a central server. And then each user accessing my site checks the validity and the keys from some central server (like pgp key servers). Is this a stupid idea? If so what could be a better idea than current system of issuing valid certificates. I am looking for a better than more secure idea. Naturally this is not a solution to an existing problem, rather it will be a hypothetical solution for some future implementation of a currently messed up web of trust on the internet due to recent news about NSA and their criminal buddies around the world. thanks

    Read the article

  • Zend Optimizer not Functioning Correctly on Plesk 9.3.0 VPS

    - by dallasclark
    I have a new VPS running Plesk 9.3.0 without 'much' modifications to any settings. I've moved a site to this VPS and I'm receiving a page full of random 'gibbrish' characters like: Zend2003120702116268102798xù Ÿ2½}MŒ%ÇqæCwËg¸„ÖXXZ[ÆùÿCK¢FŠäš’(’¢-ÂÒèu¿zš6gºÇÝ=$Ec:-xá=èàƒÃ ôžL/`,¼'û$èdû$ð ›±OYïUUdfde½á›GâcWTfDdF|‘™‘QÕ_nN‡OÝ›Ÿ/ú9¾¢»"…çÎ =B³øo/=÷…?úúW?·/LX5¯ß½ ðtEÍ ãB„ð÷øìÞéåU®•òÊëZÈi^¿lN/NÎNoÞ›/šÅC׸”šÅLËÏåùÉ+Ü á¸a6Ê÷Ž..ϯrç…Õ–)Õþñòüvsz•{å mî!F³ã[çWsÖZ%k'-ÐÝ<¬þZ1B¡¼ "-ÏîH @/Ü´b.Ï›ù"ü tb¼Ò!”]œ¼ïŠ6–Ál \Ü;½hÎOößh®^“4#…s¡CÀ†æôUèP³Ð§3¦¬“; –j‡ìþb¤÷š»¶³Wçç7÷îÜ…w•bÞs«[ÆÎav,@ÿ´ÜéÖåÌfž¯þVÚlö‹½ÎÛØå#Èoòudñ^÷чW+ÕSsÐý¹w˜7Ÿò«{ò…?<Ìo1»èZÄN_ð³»·îqr÷Vs¾"ýµ¾§þˆ¡v Ù.j†Çï®#{îÞüÞú¿ºý²Q0âLõ$rv¥{»[à|sÝwxþðúy¯)þ • 7ÛŽ È^YËZá‘JV<|·g“l2£{µ«Ù›=é§eCÍîõÖ»ÓÖQtL´D?ε܃ÁªÇ3=ﯸ^=þAIÏjöÐÁ0¡ò¥ 2øÙŸÞçÝÊéqÔ€Lï÷*+Jo¬õLͺFøì x¨ÕìÛ'GH“æådD)ÿ:¨5¼q±¦rÖøLf“Ðj îÅõ¬éa÷[!_zöN?þ"™†á©›0Ý{ˆWóª‘ÁH4µx5+Ë^–Ž›·ÉöŠd1¹Õ¬ phpinfo() shows PHP is running on the Zend engine. This server is unmanaged so I cannot ask the hosting provider for assistance. Any help big or small will be appreciated. [root@vps ~]# php -v PHP 5.1.6 (cli) (built: Mar 31 2010 02:39:17) Copyright (c) 1997-2006 The PHP Group Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies

    Read the article
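
    Output like this usually means the PHP files are Zend-encoded (the leading "Zend2003..." marker) while the PHP build on the new VPS has no Zend Optimizer/Guard loader installed, so the encoded bytes are sent to the browser as-is. A hedged check and fix, where the extension path is a placeholder that depends on the exact Zend Optimizer package for PHP 5.1.x:

      # is a Zend loader present at all?
      php -m | grep -i zend

      # after unpacking the Zend Optimizer build that matches the PHP version,
      # point php.ini at it (the path below is an example, not the real location)
      echo "zend_extension=/usr/local/Zend/lib/Optimizer-3.3.x/php-5.1.x/ZendOptimizer.so" >> /etc/php.ini
      service httpd restart
      php -v    # the banner should now also mention Zend Optimizer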

  • Installation of Active Directory on a separate VM from DNS does not entirely work - not sure why

    - by René Kåbis
    Not sure what I am doing wrong here. I have a moderately midrange server (16 cores, 2Ghz, 32GB ECC REG RAM, 6TB storage, nothing too extreme) where I am running Hyper-V (Server 2012 R2 Enterprise) in order to provision virtual machines. So why an AD separate from DNS? I want redundancy. I want to be able to move VMs and back them up individually and not have too many services on any one VM. I have already provisioned a VM with DNS, and have set it up right -- essentially, I have: Set up Static IP’s for everyone involved. Installed the DNS service on the DNS VM. Created a forward lookup zone and a reverse lookup zone (primary zone) xyz.ca Configured the zones to use nonsecure and secure dynamic updates (i will change this to secure later after the domain controller is online). Created a A record for the DC in the forward lookup zone (and a reverse ptr) Changed DC’s DNS server (network settings) to the new DNS server. Checked that I can ping the dns server from the new DC by hostname. When I went ahead and did a DCpromo on the DC, and un-cheked the “install DNS” option, everything seemed to go well (no error messages), but I saw no changes on the DNS server whatsoever (no additional settings). Plus, the DNS server seems to be unable to join the domain, as it claims that the domain is not discoverable. As a final note, I do run Symantec Endpoint Protection, which includes a firewall and most settings set as default. I have not yet tried turning this off, but my experience has been that if a service would open up a port on a Windows firewall, it would do the same through Symantec. There is pretty tight integration these days with corporate-class AV and Windows. I have a template vhdx fully set up (just short of any special roles and features) that I can use to replace the current AD VM with, so doing this all over again is not too much skin off of my nose.

    Read the article

  • Help: Setup Outgoing Mail Server Only for Multiple Domains Using Postfix?

    - by user57697
    I want an outgoing mail server ONLY for multiple domains. I plan to use Postfix as that seems to be the easiest to setup being very new to Ubuntu/Linux. The setup I plan to have are as follows: I want to use virtual domain with postfix i.e. my multiple websites must be able to send an email from each their respective domains i.e. [email protected] is sent from my domain1.com website and [email protected] is sent from domain2.com website This is an outgoing mail server only i.e. I don't want any returned (or otherwise) email sent to my postfix server. Incoming mail is handled by Google Apps/Gmail and is already setup. I already set my SPF recording to designate my mx records and postfix server ip as valid email servers i.e. "v=spf1 mx include:mydomain.com -all" How can I achieve this? I'm frankly a little confused, so some help would be appreciated. I attempted to follow these guides here, but it doesn't seem right (and it isn't clear what all the settings mean): How to configure Postfix virtual domains http://www.sysdesign.ca/guides/postfix_virtual.html Postfix Installation *.slicehost.com/2008/7/29/postfix-installation Basic Postfix settings (main.cf) *.slicehost.com/2008/7/31/postfix-basic-settings-in-main-cf I can only post one link, but those articles above can be found by replacing * with articles in the hyperlink.

    Read the article
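
    For a send-only box the two essentials are to accept mail only from the machine itself (so nothing on the internet can relay through it) and to hold no local mailboxes, since replies and bounces should go to the Google Apps MX records instead. A minimal sketch with postconf; binding each sender domain to its own outbound IP would come on top of this and is not shown:

      # listen only on localhost - the websites hand mail to the local MTA anyway
      postconf -e "inet_interfaces = loopback-only"
      postconf -e "mynetworks = 127.0.0.0/8"

      # deliver nothing locally; every hosted domain's inbox lives at Google
      postconf -e "mydestination ="
      postconf -e "local_recipient_maps ="
      postconf -e "local_transport = error:local delivery is disabled"

      postfix reload

      # quick test from the shell (addresses are placeholders)
      echo "test body" | sendmail -f [email protected] [email protected]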

  • OpenSSL: how to setup an OCSP server for checking third-party certificates?

    - by StackedCrooked
    I am testing the Certificate Revocation functionality of a CMTS device. This requires me to set up an OCSP responder. Since it will only be used for testing, I assume that the minimal implementation provided by OpenSSL should suffice. I have extracted a certificate from a cable modem, copied it to my PC and converted it to the PEM format. Now I want to register it in the OpenSSL OCSP database and start a server. I have completed all these steps, but when I do a client request my server invariably responds with "unknown". It seems to be completely unaware of my certificate's existence. I would greatly appreciate it if anyone would be willing to have a look at my code. For your convenience, I have created a single script consisting of a sequential list of all used commands, from setting up the CA until starting the server: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/AllCommands.sh You can also find the custom config file and the certificate that I am testing with: http://code.google.com/p/stacked-crooked/source/browse/trunk/Misc/OpenSSL/ Any help would be greatly appreciated.

    Read the article
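
    One thing worth spelling out about the "unknown" response: openssl ocsp answers by looking the certificate's serial number up in the CA's index.txt database, so a certificate that was not issued by that test CA (for example one pulled off a cable modem) will always come back unknown unless its serial is added to the index. A minimal responder/client pair under that assumption, with file names as placeholders:

      # responder: serve status for everything listed in demoCA/index.txt
      openssl ocsp -port 8888 -index demoCA/index.txt \
          -CA demoCA/cacert.pem -rsigner demoCA/cacert.pem -rkey demoCA/private/cakey.pem -text

      # client: ask about one certificate
      openssl ocsp -CAfile demoCA/cacert.pem -issuer demoCA/cacert.pem \
          -cert modem-cert.pem -url http://127.0.0.1:8888 -resp_text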

  • Postfix 554 <[email protected]>: Relay access denied

    - by Matt
    So i am trying to set postfix up and I am running into some problems.....here is my files vim /etc/postfix/main.cf relayhost = [smtp.gmail.com]:587 smtp_connection_cache_destinations = smtp.gmail.com smtp_sasl_auth_enable=yes smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd smtp_sasl_tls_security_options = noanonymous tls_random_source = dev:/dev/urandom smtp_tls_CAfile= /etc/pki/CA/cacert.pem smtp_tls_security_level = may smtp_tls_scert_verifydepth = 9 append_dot_mydomain = no readme_directory = no myhostname = maggie.deliverypath.com alias_maps = hash:/etc/aliases alias_database = hash:/etc/aliases myorigin = /etc/mailname mydestination = maggie.deliverypath.com, localhost.deliverypath.com, , localhost mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 mailbox_size_limit = 0 recipient_delimiter = + inet_interfaces = all I also have the gmail password info vim /etc/postfix/sasl_passwd gmail-smtp.l.google.com [email protected]:somepass smtp.gmail.com [email protected]:somepass then I try to follow this article and i get this output telnet mail.demoslice.com 25 Trying 67.207.128.80... Connected to www.slicehost.com. Escape character is '^]'. 220 www.slicehost.com ESMTP Postfix (Ubuntu) HELO test.demoslice.com 250 www.slicehost.com MAIL FROM:<[email protected]> 250 Ok RCPT TO:<[email protected]> 554 <[email protected]>: Relay access denied its started service postfix start * Starting Postfix Mail Transport Agent postfix ...done. then the screen gets frozen and i cant do anything....any ideas

    Read the article
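
    Two separate things seem to be going on in that transcript: the sasl_passwd file has to be compiled with postmap before Postfix can use it as a hash: table, and the telnet test connects from outside mynetworks and asks the server to relay to a gmail.com address, which Postfix correctly refuses with 554. A hedged sketch of the usual checks:

      # turn the plain-text password file into the hash map that main.cf points at
      postmap /etc/postfix/sasl_passwd
      chmod 600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
      postfix reload

      # test submission from the box itself (inside mynetworks) rather than telnet from outside
      echo "test" | sendmail -f [email protected] [email protected]
      tail -f /var/log/mail.log    # or /var/log/maillog, depending on the distro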

  • Trouble installing SSL Certificate on Apache

    - by jahufar
    We have a dedicated server with GoDaddy running Plesk that requires SSL. I've generated the certificate files and I created a vhost_ssl.conf (since I can't edit the default plesk apache configuration http.include, vhost_ssl.conf gets Included to httpd.include) that tells apache where to find the certificate files: SSLCertificateFile /usr/local/psa/var/certificates/domain.com.crt SSLCertificateKeyFile /usr/local/psa/var/certificates/domain.com.key SSLCertificateChainFile /usr/local/psa/var/certificates/sub.class1.server.ca.pem When I stop/start apache, it refuses to start up. The error_log does not have anything on it either (which is strange). Then I opened up httpd.include and found this bit: <VirtualHost 208.xxx.xxx.xxx:443> ServerName domain.com:443 ServerAlias www.domain.com UseCanonicalName Off SSLEngine on SSLVerifyClient none SSLCertificateFile /usr/local/psa/var/certificates/certagC9054 Include /var/www/vhosts/domain.com/conf/vhost_ssl.conf Then I commented out SSLCertificateFile /usr/local/psa/var/certificates/certagC9054 (which is plesk's SSL certificate) and restarted apache and it worked perfectly fine. It seems that Apache does not like multiple SSLCertificateFile within the same VirtualHost directive? As anyone who worked with plesk knows, I can't just remove SSLCertificateFile directive in httpd.include as plesk will overwrite my changes when someone uses it - which is why it's in vhost_ssl.conf. So I'm stuck and this is beyond my meager admin skills. Would appreciate someone who knows what (s)he's doing to tell me whats going on. Thanks in advance.

    Read the article
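
    After a change like commenting out the Plesk-generated SSLCertificateFile line, it is worth confirming which certificate the vhost actually serves, since Plesk can regenerate httpd.include at any time. A quick check from the shell, domain as in the question:

      echo | openssl s_client -connect domain.com:443 2>/dev/null | openssl x509 -noout -subject -issuer -dates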

  • SSL certificates work fine from command line but fail in script

    - by jrallison
    I'm trying to setup email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works: #!/bin/bash echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected] When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server which executes the same script, I get the following error: nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail) nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail) Error with certificate at depth: 0 issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected] subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com err 20: unable to get local issuer certificate Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed . . . message not sent. I must be messing some configuration, any ideas?

    Read the article
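
    The difference between the shell and the CI server is very likely the environment: the Bitnami stack puts its own /opt/bitnami/common/lib ahead of the system libraries, and that OpenSSL has no usable CA bundle, so verification of smtp.gmail.com's chain fails. Two hedged workarounds; the -S variable name comes from Heirloom mailx/nail and is worth confirming against the local man page:

      # 1) run nail against the system libraries instead of the Bitnami ones
      echo "Build Worked!" | env -u LD_LIBRARY_PATH nail -A myisp -s 'Build Success' [email protected]

      # 2) or point nail at a CA bundle it can verify against (path is distro-dependent)
      echo "Build Worked!" | nail -A myisp -S ssl-ca-file=/etc/ssl/certs/ca-certificates.crt -s 'Build Success' [email protected]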

  • Problem while running the J2ME application

    - by Paru
    I am not able to view any content in the emulator while running the application. The Build is not failed and i am able run the application successfully. While i am closing the emulator i am getting an error. i can provide both code and log here. import javax.microedition.lcdui.; import javax.microedition.midlet.; import java.io.; import java.lang.; import javax.microedition.io.; import javax.microedition.rms.; public class Login extends MIDlet implements CommandListener { TextField ItemName=null; TextField ItemNo=null; TextField UserName=null; TextField Password=null; Form authForm,mainscreen; TextBox t = null; StringBuffer b = new StringBuffer(); private Display myDisplay = null; private Command okCommand = new Command("Login", Command.OK, 1); private Command exitCommand = new Command("Exit", Command.EXIT, 2); private Command sendCommand = new Command("Send", Command.OK, 1); private Command backCommand = new Command("Back", Command.BACK, 2); private Alert alert = null; public Login() { ItemName=new TextField("Item Name","",10,TextField.ANY); ItemNo=new TextField("Item No","",10,TextField.ANY); myDisplay = Display.getDisplay(this); UserName=new TextField("Your Name","",10,TextField.ANY); Password=new TextField("Password","",10,TextField.PASSWORD); authForm=new Form("Identification"); mainscreen=new Form("Logging IN"); mainscreen.addCommand(sendCommand); mainscreen.addCommand(backCommand); authForm.append(UserName); authForm.append(Password); authForm.addCommand(okCommand); authForm.addCommand(exitCommand); authForm.setCommandListener(this); myDisplay.setCurrent(authForm); } public void startApp() throws MIDletStateChangeException { } public void pauseApp() { } protected void destroyApp(boolean unconditional) throws MIDletStateChangeException { } public void commandAction(Command c, Displayable d) { if ((c == okCommand) && (d == authForm)) { if (UserName.getString().equals("") || Password.getString().equals("")){ alert = new Alert("Error", "You should enter Username and Password", null, AlertType.ERROR); alert.setTimeout(Alert.FOREVER); myDisplay.setCurrent(alert); } else{ //myDisplay.setCurrent(mainscreen); login(UserName.getString(),Password.getString()); } } if ((c == backCommand) && (d == mainscreen)) { myDisplay.setCurrent(authForm); } if ((c == exitCommand) && (d == authForm)) { notifyDestroyed(); } if ((c == sendCommand) && (d == mainscreen)) { if(ItemName.getString().equals("") || ItemNo.getString().equals("")){ } else{ sendItem(ItemName.getString(),ItemNo.getString()); } } } public void login(String UserName,String PassWord) { HttpConnection connection=null; DataInputStream in=null; String url="http://olario.net/submitpost/submitpost/login.php"; OutputStream out=null; try { connection=(HttpConnection)Connector.open(url); connection.setRequestMethod(HttpConnection.POST); connection.setRequestProperty("IF-Modified-Since", "2 Oct 2002 15:10:15 GMT"); connection.setRequestProperty("User-Agent","Profile/MIDP-1.0 Configuration/CLDC-1.0"); connection.setRequestProperty("Content-Language", "en-CA"); connection.setRequestProperty("Content-Length",""+ (UserName.length()+PassWord.length())); connection.setRequestProperty("username",UserName); connection.setRequestProperty("password",PassWord); out = connection.openDataOutputStream(); out.flush(); in = connection.openDataInputStream(); int ch; while((ch = in.read()) != -1) { b.append((char) ch); //System.out.println((char)ch); } //t = new TextBox("Reply",b.toString(),1024,0); //mainscreen.append(b.toString()); String auth=b.toString(); 
if(in!=null) in.close(); if(out!=null) out.close(); if(connection!=null) connection.close(); if(auth.equals("ok")){ mainscreen.setCommandListener(this); myDisplay.setCurrent(mainscreen); } } catch(IOException x){ } } public void sendItem(String itemname,String itemno){ HttpConnection connection=null; DataInputStream in=null; String url="http://www.olario.net/submitpost/submitpost/submitPost.php"; OutputStream out=null; try { connection=(HttpConnection)Connector.open(url); connection.setRequestMethod(HttpConnection.POST); connection.setRequestProperty("IF-Modified-Since", "2 Oct 2002 15:10:15 GMT"); connection.setRequestProperty("User-Agent","Profile/MIDP-1.0 Configuration/CLDC-1.0"); connection.setRequestProperty("Content-Language", "en-CA"); connection.setRequestProperty("Content-Length",""+ (itemname.length()+itemno.length())); connection.setRequestProperty("itemCode",itemname); connection.setRequestProperty("qty",itemno); out = connection.openDataOutputStream(); out.flush(); in = connection.openDataInputStream(); int ch; while((ch = in.read()) != -1) { b.append((char) ch); //System.out.println((char)ch); } //t = new TextBox("Reply",b.toString(),1024,0); //mainscreen.append(b.toString()); String send=b.toString(); if(in!=null) in.close(); if(out!=null) out.close(); if(connection!=null) connection.close(); if(send.equals("added")){ alert = new Alert("Error", "Send Successfully", null, AlertType.INFO); alert.setTimeout(Alert.FOREVER); myDisplay.setCurrent(alert); } } catch(IOException x){ } } } and the log is pre-init: pre-load-properties: exists.config.active: exists.netbeans.user: exists.user.properties.file: load-properties: exists.platform.active: exists.platform.configuration: exists.platform.profile: basic-init: cldc-pre-init: cldc-init: cdc-init: ricoh-pre-init: ricoh-init: semc-pre-init: semc-init: savaje-pre-init: savaje-init: sjmc-pre-init: sjmc-init: ojec-pre-init: ojec-init: cdc-hi-pre-init: cdc-hi-init: nokiaS80-pre-init: nokiaS80-init: nsicom-pre-init: nsicom-init: post-init: init: conditional-clean-init: conditional-clean: deps-jar: pre-preprocess: do-preprocess: Pre-processing 0 file(s) into /home/sreekumar/NetBeansProjects/Login/build/preprocessed directory. post-preprocess: preprocess: pre-compile: extract-libs: do-compile: post-compile: compile: pre-obfuscate: proguard-init: skip-obfuscation: proguard: post-obfuscate: obfuscate: lwuit-build: pre-preverify: do-preverify: post-preverify: preverify: pre-jar: set-password-init: set-keystore-password: set-alias-password: set-password: create-jad: add-configuration: add-profile: do-extra-libs: nokiaS80-prepare-j9: nokiaS80-prepare-manifest: nokiaS80-prepare-manifest-no-icon: nokiaS80-create-manifest: jad-jsr211-properties.check: jad-jsr211-properties: semc-build-j9: do-jar: nsicom-create-manifest: do-jar-no-manifest: update-jad: Updating application descriptor: /home/sreekumar/NetBeansProjects/Login/dist/Login.jad Generated "/home/sreekumar/NetBeansProjects/Login/dist/Login.jar" is 3501 bytes. 
sign-jar: ricoh-init-dalp: ricoh-add-app-icon: ricoh-build-dalp-with-icon: ricoh-build-dalp-without-icon: ricoh-build-dalp: savaje-prepare-icon: savaje-build-jnlp: post-jar: jar: pre-run: netmon.check: open-netmon: cldc-run: Copying 1 file to /home/sreekumar/NetBeansProjects/Login/dist/nbrun4244989945642509378 Copying 1 file to /home/sreekumar/NetBeansProjects/Login/dist/nbrun4244989945642509378 Jad URL for OTA execution: http://localhost:8082/servlet/org.netbeans.modules.mobility.project.jam.JAMServlet//home/sreekumar/NetBeansProjects/Login/dist//Login.jad Starting emulator in execution mode Running with storage root /home/sreekumar/j2mewtk/2.5.2/appdb/temp.DefaultColorPhone1 /home/sreekumar/NetBeansProjects/Login/nbproject/build-impl.xml:915: Execution failed with error code 143. BUILD FAILED (total time: 35 seconds)

    Read the article

  • How do I create a wifi network bridge with qemu on OS X?

    - by a paid nerd
    I grabbed a small FreeBSD live CD and QEMU, and I'm trying to bridge my Mac OS X 10.8 wifi connection so that the guest OS is available on my LAN. However, the guest OS never gets a DHCP lease. This works perfectly with VirtualBox in their "bridged" network mode, so I know it can be done. I need to get it working with QEMU because VirtualBox doesn't support the architecture that I need for this project. Here's what I've done so far based on hours of googling: Installed TUNTAP for OS X Told OS X to supposedly forward all packets, even ARP: (NOTE: This doesn't appear to work.) $ sudo sysctl -w net.inet.ip.forwarding=1 $ sudo sysctl -w net.link.ether.inet.proxyall=1 $ sudo sysctl -w net.inet.ip.fw.enable=1 Created a bridge: $ sudo ifconfig bridge0 create $ sudo ifconfig bridge0 addm en0 addm tap0 $ sudo ifconfig bridge0 up $ ifconfig bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether ac:de:xx:xx:xx:xx Configuration: priority 0 hellotime 0 fwddelay 0 maxage 0 ipfilter disabled flags 0x2 member: en0 flags=3<LEARNING,DISCOVER> port 4 priority 0 path cost 0 member: tap0 flags=3<LEARNING,DISCOVER> port 8 priority 0 path cost 0 tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500 ether ca:3d:xx:xx:xx:xx open (pid 88244) Started tcpdump with -I in the hopes that it enables promiscuous mode on the wifi device: $ sudo tcpdump -In -i en0 Run QEMU using the bridged network instructions: $ qemu-system-x86_64 -cdrom mfsbsd-9.2-RELEASE-amd64.iso -m 1024 \ -boot d -net nic -net tap,ifname=tap0,script=no,downscript=no But the guest system never gets a DHCP lease: If I tcpdump -ni tap0, I see lots of traffic from the wireless network. But if I tcpdump -ni en0, I don't see any DHCP traffic from the QEMU guest OS. Any ideas? Update 1: I tried sudo defaults write "/Library/Preferences/SystemConfiguration/com.apple.Boot" "Kernel Flags" "net.inet.ip.scopedroute=0" and rebooting per this mailing list suggestion, but this didn't help. In fact, it made VirtualBox bridged mode stop working.

    Read the article
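
    If the real goal is just reachability for a few services rather than a true layer-2 bridge, QEMU's user-mode networking with host port forwarding sidesteps the Wi-Fi bridging problem entirely (802.11 access points generally drop frames whose source MAC is not the associated card's, which is why the tap side only ever sees broadcasts). A hedged alternative invocation:

      # guest gets NAT'd networking; SSH on the guest becomes reachable as port 2222 on the Mac
      qemu-system-x86_64 -m 1024 -cdrom mfsbsd-9.2-RELEASE-amd64.iso -boot d \
          -netdev user,id=net0,hostfwd=tcp::2222-:22 \
          -device e1000,netdev=net0

      # from the Mac, or from anything on the LAN that can reach the Mac
      ssh -p 2222 root@<mac-ip>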

  • How to have SSL on Amazon Elastic Load Balancer with a Gunicorn EC2 server?

    - by Riegie Godwin
    I'm a self taught back end engineer so I'm learning all of this stuff as I go along. For the longest time, I've been using basic authentication for my users. Many developers are advising against this approach since each request will contain the username & password in clear text. Anyone with the right skills can sniff on the connection between my iOS application and my Django/Gunicorn Server and obtain their password. I wouldn't want to put my user's credentials at risk so I would like to implement a more secure way of authentication. SSL seems to be the most viable option. My server doesn't serve any static content or anything crazy of that sort. All the server does is send and receive "json" responses from and to my iOS application. Here is my current topology. iOS application ------ Amazon Elastic Load Balancer ------- EC2 Instances running HTTP Gunicorn. Gunicorn runs on port 8000. I have a CNAME record from GoDaddy for the Amazon Elastic Load Balancer DNS. So instead of using the long DNS to make requests, I just use server.example.com. To interact with my servers I send and receive requests to server.example.com:8000/ This setup works and has been solid. However I need to have a more secure way. I would like to setup SSL between my iOS application and my Elastic Load Balancer. How can I go about doing this? Since I am only sending json responses to my application, do I really need to buy a certificate from a CA or can I create my own? (since browsers will not be interacting with my servers. My servers are only designed to send json responses to my iOS application).

    Read the article
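
    The usual pattern here is to terminate TLS on the Elastic Load Balancer and keep plain HTTP between the ELB and Gunicorn on port 8000. That requires a certificate the iOS client will trust out of the box, so a CA-signed one for server.example.com rather than self-signed. A sketch with the AWS CLI, where the load balancer name, file names and account number are placeholders:

      # upload the CA-signed certificate for server.example.com
      aws iam upload-server-certificate --server-certificate-name example-com \
          --certificate-body file://server.example.com.crt \
          --private-key file://server.example.com.key \
          --certificate-chain file://intermediate-chain.crt

      # add an HTTPS listener that forwards decrypted traffic to the instances on port 8000
      aws elb create-load-balancer-listeners --load-balancer-name my-elb \
          --listeners "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8000,SSLCertificateId=arn:aws:iam::123456789012:server-certificate/example-com"

    A self-signed certificate would only work if the app overrides the default trust evaluation, so a cheap CA-signed certificate is usually the simpler route.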

  • RemoteApp shows no certificate available but RD Session host finds it fine

    - by Scott Chamberlain
    I am trying to set up RemoteApp for an internal domain. I have a Root CA that is trusted by all of the end computers, and that cert has signed a wildcard cert I am trying to use for the server. I added the pfx of the wildcard cert to the local machine personal store. From there I can use it fine for signing the RD Session Host session. However, when I try to set up the signature for RemoteApp the certificate does not show up. What do I need to do to get my certificate to be available for use? UPDATE: The certificate was generated through the following commands: makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx The resultant pfx was added to the machine local store via mmc. Oddly, going into PowerShell, if I add the -CodeSigningCert flag to find the wildcard certificate it is excluded from the search results for Get-ChildItem in my Cert:\Local Machine\My path, but if I don't include it, it is there.

    Read the article

  • first time setting up ssl, running into a strange problem, tutorials haven't been too helpful

    - by pedalpete
    This is my first time trying to set up SSL for one of my sites, and I'm running it on a server that has 3 other sites already hosted. I'm running apache2.?? and the install came with an ssl.conf file. The ssl.conf has the following settings: LoadModule ssl_module modules/mod_ssl.so Listen 443 AddType application/x-x509-ca-cert .crt AddType application/x-pkcs7-crl .crl <VirtualHost *:443> ServerAdmin [email protected] DocumentRoot /var/www/html/securesite ServerName securesite.com ErrorLog logs/securesite-error_log CustomLog logs/securesite-access_log common SSLEngine on SSLCertificateFile /etc/httpd/ssl.crt/securesite.com.crt SSLCertificateKeyFile /etc/httpd/ssl.key/server.key SSLCertificateChainFile /etc/httpd/ssl.crt/gd_bundle.crt </VirtualHost> When I run 'apachectl configtest', I don't get any errors, but running 'apachectl -k restart', I get 'httpd not running, trying to start'. I have two questions: 1) Is there an error in the way I'm defining my virtualhost for 443? The rest of my entries point to <VirtualHost *:80>. When I comment out the above entry, apache runs fine. 2) Do I need to set up a redirect from port 80 for the secure site? Most users are going to go to http: or www., and I need to send them to https: - does apache do this automatically, or do I need to create an entry with a redirect?

    Read the article
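
    On the second question: Apache does not redirect port 80 to 443 by itself; that takes an explicit plain-HTTP VirtualHost. A hedged sketch in the style of the entry above (Redirect shown, mod_rewrite works too), plus the usual checks when the SSL host refuses to start without logging anything:

      # check whether something else already holds port 443, and read the real error
      sudo netstat -tlnp | grep ':443'
      sudo tail -n 50 /var/log/httpd/error_log      # log path varies by distro

      # --- port-80 companion vhost, e.g. /etc/httpd/conf.d/securesite-redirect.conf ---
      <VirtualHost *:80>
          ServerName securesite.com
          ServerAlias www.securesite.com
          Redirect permanent / https://securesite.com/
      </VirtualHost>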
