Search Results

Search found 21212 results on 849 pages for 'apt key'.


  • In 12.04 LTS, I get "No DBus" errors when running things as root

    - by Seann
    While attempting to run gEdit as root from a terminal window (was trying to do some tweaking on my HOSTS and FSTAB files), I get a message saying "No DBus connection available" and get booted back to the prompt. However, I can run Nautilus from the prompt like that (still get the error, but it runs all the same), and use WINE and NOTEPAD, and was able to make my changes. I thought maybe DBUS was missing, but APT says it's installed and gEdit runs fine when not elevated. Granted, I don't have to elevate often, but on the off-chance I do, (like adding or changing SMB/CIFS mountpoints in FSTAB), I would like to use gEdit, not NOTEPAD from WINE, and not in a terminal window with VI (well VIM). Ideas? Solutions?
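
    A couple of ways this is commonly worked around on 12.04 (a sketch, not a definitive fix; gksu may need to be installed first):

        # sudoedit copies the file to a temp location, opens your default
        # console editor as your own user, then writes it back as root
        sudoedit /etc/hosts
        # for a graphical editor, gksudo sets up the root session environment
        # that a plain "sudo gedit" lacks
        sudo apt-get install gksu
        gksudo gedit /etc/fstab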

    Read the article

  • Ubuntu 13.10 install VMware 9.0

    - by user212290
    After I installed VMware Workstation 9.0, whenever I want to open a VM a dialog appears saying "Before you can run VMware, several modules must be compiled and loaded into the running kernel", with CANCEL and INSTALL buttons. When I click the INSTALL button, nothing happens. When I run:

        sudo apt-get install linux-headers-3.11.0-12-generic
        sudo /usr/bin/vmware-modconfig --icon=vmware-workstation --appname=VMware

    I get:

        cc1: some warnings being treated as errors
        make[2]: *** [/tmp/modconfig-T9k19t/vmci-only/linux/driver.o] Error 1
        make[2]: *** Waiting for unfinished jobs....
        make[1]: *** [_module_/tmp/modconfig-T9k19t/vmci-only] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.11.0-12-generic'
        make: *** [vmci.ko] Error 2
        make: Leaving directory `/tmp/modconfig-T9k19t/vmci-only'
        Failed to build vmci.  Failed to execute the build command.
        Starting VMware services:
           Virtual machine monitor                                 done
           Virtual machine communication interface                 failed
           VM communication interface socket family                done
           Blocking file system                                    done
           Virtual ethernet                                        failed
           VMware Authentication Daemon                            done

    Read the article

  • Issues with Dz77BH-55K Motherboard and i7 processor on 12.04

    - by Naveed
    I just built a computer with Intel's DZ77BH-55K motherboard with i7-3770 processor. On 12.04, 11.10, and 11.04 and Linux Mint 12, the computer has been really laggy. The graphics aren't working (choppy effects, bad resolution) and the keyboard and mouse inputs are even laggy and unreliable (skips keystrokes). I'm not sure what the problem is or what I can do to fix it. I tried sudo apt-get install mesa-utils but nothing changed. I've also messed around in the BIOS but no luck there either. Any ideas? Could it possibly be a hardware issue?

    Read the article

  • How can I run samba?

    - by depesz
    I have a server running Ubuntu 10.10. I've never used Samba before, as I never had Windows machines, but now I need it. So I ran: apt-get install samba smbfs smbclient. The packages are installed, but I have no idea how to configure it. All the howtos I found on the net refer to /etc/samba/something.conf, but I don't even have an /etc/samba directory. The only config I found is /etc/default/samba, which contains (aside from comments) only: RUN_MODE="daemons". All I want is to be able to access some directories on the Ubuntu machine from Windows, nothing else.
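
    A minimal sketch of the usual setup, assuming the samba server package simply failed to lay down its default config; the share name "shared", the path /srv/shared and the username are placeholders:

        # reinstalling the server package should (re)create /etc/samba/smb.conf
        sudo apt-get install --reinstall samba
        # then append a share definition to /etc/samba/smb.conf, for example:
        #   [shared]
        #      path = /srv/shared
        #      read only = no
        #      guest ok = no
        # give an existing Unix user a Samba password and restart the daemons
        sudo smbpasswd -a youruser
        sudo /etc/init.d/samba restart    # or: sudo service smbd restart, depending on release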

    Read the article

  • How to start jenkins?

    - by Jeffery Bingham
    I installed Jenkins via sudo apt-get install jenkins. However, it doesn't start up. I tried to start it manually using sudo /etc/init.d/jenkins start, but I get this message when I try to start it that way:

        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.67" (uid=1000 pid=7970 comm="start jenkins ") interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply="0" destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")"

    The init.d method just says it is starting, but it never actually starts. How do I fix this and get Jenkins to start up?
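
    The "Rejected send message ... uid=1000" part of that error usually just means the Upstart job was started without root privileges, so a sketch of the usual next steps (the log path is the stock Debian/Ubuntu package location):

        sudo service jenkins start        # dispatches to Upstart or init.d as appropriate
        # if it still refuses to come up, the package's own log usually says why
        sudo tail -n 50 /var/log/jenkins/jenkins.log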

    Read the article

  • Logging in over and over again. How to fix this?

    - by romeovs
    Ok, I messed up. I installed Ubuntu 11.10, installed the awesome WM and removed Unity. To have something to fall back on, I also installed gnome-session-fallback. I was messing around and ran the following, because the awesome wiki told me to:

        gconftool-2 --type bool --set /apps/nautilus/preferences/show_desktop False   # still disable the buggy Nautilus desktop thing
        gconftool-2 --type string --set /desktop/gnome/session/required_components/windowmanager awesome   # sets awesome as wm

    Now here's what's wrong: I can start up fine, and then I get to a login window (that of gnome-session-fallback). I enter my username, select the preferred window manager (awesome in my case) and enter my password. It accepts these, but then holds for a second and just opens the login window again, in effect preventing me from actually logging in. I also tried gconftool-2 --unset (from the tty) on these settings, but that didn't work either. What can I do to revert the gconftool-2 settings to something that should work? I tried purging gnome-session-fallback and lightdm with apt-get and then installing them again, but that didn't work.
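
    A sketch of putting the two keys back from a text console (Ctrl+Alt+F1), run as the same user whose session is looping; the window-manager value metacity is an assumption about the fallback session's default, not something taken from the post:

        gconftool-2 --type bool --set /apps/nautilus/preferences/show_desktop true
        gconftool-2 --type string --set /desktop/gnome/session/required_components/windowmanager metacity
        # a root-owned ~/.Xauthority or ~/.ICEauthority causes the same login
        # loop, so check the ownership while you are there
        ls -l ~/.Xauthority ~/.ICEauthority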

    Read the article

  • Friday Tips #6, Part 2

    - by Chris Kawalek
    Here is a question about updating Oracle VM: Question: How can I perform Oracle VM 3 server updates from Oracle VM Manager? Answer by Gregory King, Principal Best Practices Consultant, Oracle VM Product Management: Server Update Manager is a built-in feature of the Oracle VM Manager. Basically, Server Update Manager automatically configures YUM updates on all the Oracle VM Servers, pointing each to our Unbreakable Linux Network (ULN) update channel for Oracle VM. The servers periodically check with our Oracle YUM repository and notify the Oracle VM Manager that an update is available for each server. Actual server updates must be triggered by the Oracle VM administrator; they are not executed automatically. At this point, you can use the Oracle VM Manager to put a server into maintenance mode, which live-migrates all the running Oracle VM Guests to other Oracle VM Servers in the server pool. Once all the Oracle VM Guests have been migrated, the Oracle VM administrator can trigger the update on the server. The entire process is documented in the Installation and Upgrade Guide of the Oracle VM Documentation, so I won't spend time detailing the steps. However, configuring the Server Update Manager is exceedingly simple. Simply navigate to the Tools and Resources tab in the Oracle VM Manager, select the link for Server Update Manager and ensure the following values are entered in the text boxes:

        YUM Base URL: http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64
        YUM GPG Key: file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle

    Every server in the pool will be automatically configured for YUM updates once you choose the Apply button. Many thanks to Greg and Rick for providing the answers to this week's questions. If you want to ask us something, hit up Twitter and use hashtag #AskOracleVirtualization. See you next week! -Chris
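
    For reference, a hand-written equivalent of the channel configuration those two values describe would look roughly like the sketch below on an Oracle VM Server (the repo id and file name are illustrative, not what Server Update Manager actually writes):

        sudo tee /etc/yum.repos.d/ovm3-latest.repo <<'EOF'
        [ovm3-latest]
        name=Oracle VM 3 latest
        baseurl=http://public-yum.oracle.com/repo/OracleVM/OVM3/latest/x86_64
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
        gpgcheck=1
        enabled=1
        EOF
        sudo yum check-update    # the server should now see the Oracle VM channel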

    Read the article

  • Install a Mirror without downloading all the packages in the official repository

    - by Sam
    First I'll explain the situation (both PCs are running Ubuntu 12.04): I have a laptop connected to a wifi connection, and a desktop that cannot be connected to the Internet (the modem is too far from it), and I want to install some software on the latter. The two PCs are connected with an Ethernet cable. I've already searched for a solution, but all I found involved software that should already have been installed on the "Internet-less PC" (Keryx, APTonCD ...). What I want to do is create a mirror on my laptop that contains the packages I already have on it (in /var/cache/apt/archives), without downloading all the packages from the official repository; I don't need them. Can someone tell me if this is possible? Thank you.
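
    It is possible; a minimal sketch using dpkg-scanpackages (from dpkg-dev) to turn the laptop's apt cache into a tiny flat repository served over the Ethernet link. The directory, port and IP address below are examples, not anything from the question:

        # on the laptop: build a flat repo from the already-downloaded .debs
        sudo apt-get install dpkg-dev
        mkdir -p ~/localrepo && cp /var/cache/apt/archives/*.deb ~/localrepo/
        cd ~/localrepo && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz
        # serve it to the desktop over the cable (12.04 ships Python 2)
        python -m SimpleHTTPServer 8000
        # on the desktop: point apt at the laptop and update
        # (apt will warn the packages are unauthenticated; expected for a local flat repo)
        echo 'deb http://192.168.0.10:8000/ ./' | sudo tee /etc/apt/sources.list.d/local.list
        sudo apt-get update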

    Read the article

  • Openbox overhead is similar to that with gnome-panel

    - by drN
    I just installed openbox via sudo apt-get install openbox. It already has obconf, btw. I noticed that when I logged into my openbox session instead of the one I usually use (gnome-panel and NOT Ubuntu 2D or one of the high overhead environments) and checked via htop, I found that a similar amount of RAM was being occupied (~600 MB or so) with openbox or gnome-panel. What gives? Openbox looks lighter but it certainly isn't any different. Obviously the same daemons etc would run in both environments as they share the same folders. Is gnome-panel as good as openbox then?

    Read the article

  • 'ia32-libs is not installed' while installing Skype on Ubuntu

    - by Vit Kos
    I downloaded Skype from the official site, but when installing it I get this error:

        (Reading database ... 100%
        (Reading database ... 150271 files and directories currently installed.)
        Unpacking skype (from .../skype-ubuntu_4.0.0.8-1_amd64.deb) ...
        dpkg: dependency problems prevent configuration of skype:
         skype depends on ia32-libs; however:
          Package ia32-libs is not installed.
        dpkg: error processing skype (--install):
         dependency problems - leaving unconfigured
        Processing triggers for desktop-file-utils ...
        Processing triggers for bamfdaemon ...
        Rebuilding /usr/share/applications/bamf.index...
        Processing triggers for gnome-menus ...

    I read that I need to install ia32-libs, and tried to install it like this: sudo apt-get install package-name:i386, but apt can't find it. Any hint? Thanks.
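
    For what it's worth, the package the .deb asks for is literally named ia32-libs (no :i386 suffix), so a sketch of the usual way out:

        sudo apt-get install ia32-libs
        # or simply let apt repair the half-configured skype package and pull
        # in the missing dependency in one step
        sudo apt-get -f install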

    Read the article

  • Radeon HD 6290 terrible performance on a certified laptop

    - by dac
    I bought Asus K53U laptop, which is Ubuntu certified with pre-installed 11.10. The graphic card is Radeon HD 6290 but 720p playback is terrible. Even page scrolling in Firefox is very laggy. Proprietary drivers are installed by default. How is this possible, why is the laptop Ubuntu certified if the performance is poor? Any solution to this? I just did apt-get autoremove, and after that, this message came out in terminal: Error inserting vesafb (/lib/modules/3.0.0-15-generic/kernel/drivers/video/vesafb.ko): No such device Could that be the problem?

    Read the article

  • Ubuntu 12.10 - VirtualBox not sharing internet with guest system

    - by Fernando Briano
    I went from ArchLinux to Ubuntu on my dev box. I use VirtualBox to test web sites on Windows and IE. I have my Windows 7 VirtualBox image running on Ubuntu's VirtualBox. Back with ArchLinux, internet worked "out of the box" on the Windows boxes. I left the default options on the box's Network Options (NAT). The Windows machine shows as "connected to ethernet" but reports: The dns server isn't responding So I can't access Internet from there. I tried searching for Ubuntu's official docs but they seem pretty outdated. I tried using my old boxes from Arch (which boot normally but have no internet) and creating a new box from Ubuntu itself, but still get the same results. Update: I'm using VirtualBox 4.1.18 from Ubuntu's repository (apt-get install virtualbox).
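
    A common fix for exactly this "DNS server isn't responding" symptom with NAT (a sketch; "Win7" stands for whatever the VM is named, and the VM must be powered off first):

        # have VirtualBox's NAT engine answer DNS through the host's resolver
        VBoxManage modifyvm "Win7" --natdnshostresolver1 on
        # or the lighter variant that only proxies DNS packets to the host's servers
        VBoxManage modifyvm "Win7" --natdnsproxy1 on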

    Read the article

  • Mapping capslock to control on Mac OS X: works for some things, but not others?

    - by keflavich
    I've mapped my capslock key to control using the Modifier Keys mapping in System Preferences: Keyboard. I've also tried mapping to "right control" instead of "left control" as per http://hints.macworld.com/article.php?story=20060825072451882 using a plist editor. The mapping seems to work in all cases except one: I can't use capslock with left-shift to make key mappings or apparently do anything else. capslock (as control) with right-shift works. I'm primarily testing by using control-tab / control-shift-tab to switch between tabs. Using the on-screen-keyboard viewer, I can get capslock-shift-(just about anything) to work, but not capslock-leftshift-tab. My best guess is that somehow the particular keyboard I'm working on is faulty, but I'm curious whether anyone else can reproduce this or has any ideas.

    Read the article

  • Java keyboard input [on hold]

    - by dØd
    I'm trying to implement an input system that can detect whether a certain key was held down or only pressed briefly. So far I have this:

        KEY_INTERACTION_TRESHOLD = 400ms

        // inside a constructor
        shouldMeasure = true;

        @Override
        public void keyPressed(KeyEvent e) {
            if (shouldMeasure) {
                startTime = System.currentTimeMillis();
                shouldMeasure = false;
                return;
            }
            System.out.println("Button is held down");
            e.consume();
        }

        @Override
        public void keyReleased(KeyEvent e) {
            if (System.currentTimeMillis() - startTime < KEY_INTERACTION_TRESHOLD) {
                System.out.println("Button was only pressed briefly");
            }
            startTime = 0;
            shouldMeasure = true;
            e.consume();
        }

    Now this works, but the problem is that there is a delay between when I press a key to hold it and when the message 'Button is held down' gets displayed. I understand why this delay occurs (for example, when you press and hold a letter there will be a similar delay between the first and the second letter printed out), but I would like to somehow avoid it. I'm using only the Java API.

    Read the article

  • How can I get ssh-agent working over ssh and in tmux (on OS X)?

    - by Rich
    I have a private key set up for my GitHub account, the passphrase to which is, I believe, stored in OS X's keychain. I certainly don't have to type it in when I open a terminal window and enter ssh git@github.com. However, when I'm running bash over an ssh session, or locally inside a tmux session, I have to type in the passphrase every single time I attempt to ssh to GitHub. This question suggests that a similar problem exists with screen, but I don't really understand the issue well enough to fix it in tmux. There's also this page which includes a fairly complicated solution, but for zsh. EDIT: In response to @Mikel's answer, from a local terminal I get the following output:

        [~] $ echo $SSH_AUTH_SOCK
        /tmp/launch-S4HBD6/Listeners
        [~] $ ssh-add -l
        2048 [my key fingerprint] /Users/richie/.ssh/id_rsa (RSA)
        [~] $ typeset -p SSH_AUTH_SOCK
        declare -x SSH_AUTH_SOCK="/tmp/launch-S4HBD6/Listeners"

    Whereas over ssh or in tmux I get:

        [~] $ echo $SSH_AUTH_SOCK
        [~] $ ssh-add -l
        Could not open a connection to your authentication agent.
        [~] $ typeset -p SSH_AUTH_SOCK
        bash: typeset: SSH_AUTH_SOCK: not found

    echo $SSH_AGENT_PID returns nothing, whatever shell I run it from.
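
    One widely used workaround (a sketch, not specific to this machine): keep a stable symlink to whichever agent socket the outer login session provides, and have every shell, including those inside tmux or an ssh session with agent forwarding, point at the symlink.

        # in ~/.bash_profile (or equivalent), evaluated outside tmux:
        if [ -n "$SSH_AUTH_SOCK" ] && [ "$SSH_AUTH_SOCK" != "$HOME/.ssh/agent_sock" ]; then
            ln -sf "$SSH_AUTH_SOCK" "$HOME/.ssh/agent_sock"
        fi
        export SSH_AUTH_SOCK="$HOME/.ssh/agent_sock"
        # for the remote-ssh case, agent forwarding also has to be on:
        #   ssh -A host     (or "ForwardAgent yes" in ~/.ssh/config)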

    Read the article

  • You do not appear to be using the NVIDIA X driver

    - by Vishal shekhar
    My laptop has an NVIDIA GT 540M. Yesterday I installed nvidia-current after adding the updates PPA with sudo apt-add-repository ppa:ubuntu-x-swat/x-updates. Then I ran sudo nvidia-xconfig and rebooted; my desktop visual effects changed and it looks as if the NVIDIA driver is working, but GLX is still not working and nvidia-settings tells me "You do not appear to be using the NVIDIA X driver". My dkms status is:

        nvidia-current, 304.43, 3.2.0-30-generic-pae, i686: installed

    The output of lspci | grep VGA is:

        00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
        01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [GeForce GT 540M] (rev a1)

    Also, I cannot log in to my admin account, but logging in as a standard user or guest works; the NVIDIA X driver still isn't used there either. Can anyone suggest something to get the NVIDIA X driver working? I have read many forums but none worked for me. Earlier I tried nvidia-173 and nvidia-current (before the x-swat/x-updates repository), but neither worked.
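
    A couple of read-only checks that help pin down which driver is actually driving the display (just diagnostics, not a fix):

        # which kernel driver each of the two GPUs is bound to
        lspci -nnk | grep -iA3 vga
        # which X driver the server actually loaded
        grep -iE 'nvidia|nouveau|intel' /var/log/Xorg.0.log | head -n 20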

    Read the article

  • Elastic beanstalk access private git repo

    - by user221676
    I am currently trying to add an SSH key to my Elastic Beanstalk instances using .ebextensions commands. The keys are stored in my application code, and I try to copy them to root's .ssh folder so I can access them when doing a git+ssh clone later. Here is an example of the config file in my .ebextensions folder:

        packages:
          yum:
            git: []
        container_commands:
          01-move-ssh-keys:
            command: "cp .ssh/* ~root/.ssh/; chmod 400 ~root/.ssh/tca_read_rsa; chmod 400 ~root/.ssh/tca_read_rsa.pub; chmod 644 ~root/.ssh/known_hosts;"
          02-add-ssh-keys:
            command: "ssh-add ~root/.ssh/tca_read_rsa"

    The problem is that I get an error when attempting to clone the repo: Host key verification failed. I have tried many ways of adding the host to the known_hosts file but none have worked! The command that is doing the clone is npm install, as the repo points to a node module.
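
    One common way past "Host key verification failed" in a non-interactive clone (a sketch): record GitHub's host key up front, for example in a container_command that runs before the clone, instead of shipping a known_hosts file.

        # append GitHub's hashed host key so the later git+ssh clone is trusted
        ssh-keyscan -H github.com >> ~root/.ssh/known_hosts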

    Read the article

  • Cinnamon install problem ubuntu 12.04

    - by Kin.
    I was following "How do I install the Cinnamon Desktop?", but when I try to install it, this happens:

        locahost@locahost:~$ sudo apt-get install cinnamon
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         cinnamon : Depends: libgjs0-
        E: Unable to correct problems, you have held broken packages.

    How can I install the libgjs0- package?

    Read the article

  • What do you think about gems and eggs? Alternatives?

    - by Juanlu001
    I've recently read some criticism (see 1, 2, 3) of the package distribution systems of two popular programming languages: Ruby gems and Python eggs. The most important argument stated against them is that they replace the system package manager (in case there is one, as in every Linux distribution), which makes eggs and gems difficult to track and code difficult to patch, and so on. Are eggs and gems actually the right approach? If not, are there any alternatives for distributing Python or Ruby modules? Should developers focus on taking advantage of the capabilities of system package managers (apt-get, pacman, ...)?

    Read the article

  • Cannot see shared folder in /mnt/hgfs

    - by blasto
    I am trying to share a folder between Lubuntu 13.04 (in VMware player) and Windows 7 64 bit. I followed a tutorial till step 16. I typed a command and saw nothing. I also went into the /mnt/hgfs folder and saw nothing there. How do I fix this ? http://theholmesoffice.com/how-to-share-folders-between-windows-and-ubuntu-using-vmware-player/ Command - dir /mnt/hgfs EXTRAS - By the way, this is how I actually reached step 16. Step 12 - sudo apt-get install hgfsclient Step 14 - If it does not work, then follow this tutorial - http://www.liberiangeek.net/2013/03/how-to-quickly-install-vmware-tools-in-ubuntu-13-04-raring-ringtail/ Step 16 - STUCK !!!
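
    A sketch of the usual checks when /mnt/hgfs stays empty (it assumes VMware Tools or open-vm-tools is installed in the guest and the folder is enabled and shared in Player's settings):

        # list the shares the host is actually exporting to this guest
        vmware-hgfsclient
        # if shares are listed but nothing is mounted, mount them by hand
        sudo mount -t vmhgfs .host:/ /mnt/hgfs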

    Read the article

  • After installing ubuntu 12.04 my internet connection has completely disappeared

    - by Tony
    On my PC, after installing Ubuntu 12.04, my network connections are completely gone. In the terminal, after typing nm-tool I get the following:

        The program nm-tool is currently not installed. You can install by typing:
        sudo apt-get install network-manager

    After I type that in, then my password, I get this:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         network-manager : Depends: iputils-arping but it is not going to be installed
        E: Unable to correct problems, you have held broken packages

    I'm a complete novice when it comes to computers, so I have no clue.
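
    With no working connection, apt can only pull packages from the installation media, so a minimal sketch assuming the same CD/USB used for the install is still at hand:

        # register the install media as a package source
        sudo apt-cdrom add
        sudo apt-get update
        # install the dependency apt complained about, then network-manager itself
        sudo apt-get install iputils-arping network-manager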

    Read the article

  • How to install vmware tools?

    - by Tom
    I installed Ubuntu in VMware and now I need to install VMware Tools, but I get this error:

        Searching for a valid kernel header path...
        The path "" is not valid.
        Would you like to change it? [yes]

    In CentOS, I run the following commands to resolve this issue:

        yum install gcc-c++
        yum install kernel-devel
        yum install kernel-headers
        yum -y update kernel

    But I don't know how to do this in Ubuntu. Please help.
    Update: I have tried the following commands but nothing changed; I still get the same "Searching for a valid kernel header path... The path "" is not valid." error:

        sudo apt-get update
        sudo-get install build-essential linux-header-$(uname -r)
        sudo ./vmware-uninstall-tools.pl
        sudo ./vmware-config-tools.pl
        sudo ./vmware-install.pl

    Issue changed: I ran sudo ./vmware-uninstall-tools.pl, deleted the /etc/vmware-tools folder and then ran sudo ./vmware-install.pl. Now I can successfully install VMware Tools. After a restart I can see the /mnt/hgfs folder, but I can't see my shared folder.
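
    For what it's worth, two of the commands in the update look mistyped (sudo-get and linux-header), which would explain why the headers never appeared; a corrected sketch for Ubuntu:

        sudo apt-get update
        sudo apt-get install build-essential linux-headers-$(uname -r)
        # then re-run the installer; when it asks for the kernel header path,
        # /lib/modules/$(uname -r)/build is the usual answer
        sudo ./vmware-install.pl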

    Read the article

  • Why does tomcat-admin install require adding admin and manager to tomcat-users.xml manually?

    - by J G
    I installed tomcat6 on lucid using apt-get. All working. I installed tomcat-admin. Not working. I amended the /etc/tomcat6/tomcat-users.xml file to uncomment the users and roles (from the default) to be like the following:

        <role rolename="tomcat"/>
        <role rolename="role1"/>
        <user username="tomcat" password="password" roles="tomcat"/>
        <user username="both" password="password" roles="tomcat,role1"/>
        <user username="role1" password="password" roles="role1"/>

    This still didn't work. Then, from the following page, I added:

        <role rolename="manager"/>
        <user username="admin" password="secret" roles="manager"/>

    and then it worked. Why doesn't this occur as part of the install? (Why isn't this in the Ubuntu Manual on Tomcat?)
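
    A small follow-on note for anyone making the same edit (a sketch): the file is only re-read on restart, and the manager role is what the web UI checks.

        # pick up the edited tomcat-users.xml
        sudo service tomcat6 restart
        # then log in to the manager webapp with the user carrying the "manager"
        # role, by default at http://localhost:8080/manager/html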

    Read the article

  • what does it mean for MalwareBytes to find malicious registry keys but nothing else?

    - by EndangeringSpecies
    I have a machine that is obviously infected, and when I ran MalwareBytes it told me that it found some "malicious" registry keys (surprisingly enough these contained file path to currently non-existent javascript files). But, that's it. Full scan did not uncover any malicious files, or malicious hidden processes in memory. Like, maybe the (hidden?) process that for whatever reason periodically injects keystrokes (hotkeys?) into whatever currently open window. Then on another, not obviously infected, machine it found a "malware.trace" registry key but again no files or processes etc. How does this jive with people's experience with MalwareBytes? Does it usually find registry key symptoms of an infection but nothing else? Or is it a common thing to have no infection but some malicious registry keys in place anyway?

    Read the article

  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS, but curl is failing to verify the SSL cert:

        $ curl --verbose --capath ./certs/ --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: none
          CApath: ./certs/
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS alert, Server hello (2):
        * SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        * Closing connection #0
        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        More details here: http://curl.haxx.se/docs/sslcerts.html

        curl performs SSL certificate verification by default, using a "bundle"
        of Certificate Authority (CA) public keys (CA certs). If the default
        bundle file isn't adequate, you can specify an alternate file
        using the --cacert option.
        If this HTTPS server uses a certificate signed by a CA represented in
        the bundle, the certificate verification probably failed due to a
        problem with the certificate (it might be expired, or the name might
        not match the domain name in the URL).
        If you'd like to turn off curl's verification of the certificate, use
        the -k (or --insecure) option.

    I know about the -k option, but I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains a Verisign intermediate cert and two self-signed certs. The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert), curl is able to verify the SSL cert:

        $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/
        * About to connect() to example.com port 443 (#0)
        *   Trying 1.1.1.1... connected
        * Connected to example.com (1.1.1.1) port 443 (#0)
        * successfully set certificate verify locations:
        *   CAfile: ./certs/verisign-intermediate-ca.crt
          CApath: /etc/ssl/certs
        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using RC4-SHA
        * Server certificate:
        *        subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com
        *        start date: 2011-04-17 00:00:00 GMT
        *        expire date: 2012-04-15 23:59:59 GMT
        *        common name: example.com (matched)
        *        issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3
        * SSL certificate verify ok.
        > HEAD / HTTP/1.1
        > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
        > Host: example.com
        > Accept: */*
        >
        < HTTP/1.1 404 Not Found
        HTTP/1.1 404 Not Found
        < Cache-Control: must-revalidate,no-cache,no-store
        Cache-Control: must-revalidate,no-cache,no-store
        < Content-Type: text/html;charset=ISO-8859-1
        Content-Type: text/html;charset=ISO-8859-1
        < Content-Length: 1267
        Content-Length: 1267
        < Server: Jetty(7.2.2.v20101205)
        Server: Jetty(7.2.2.v20101205)
        <
        * Connection #0 to host example.com left intact
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):

    In addition, if I try hitting one of the sites using a self-signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory and that it is properly hashed. Finally, I am able to verify the SSL cert with openssl, using its -CApath option:

        $ openssl s_client -CApath ./certs/ -connect example.com:443
        CONNECTED(00000003)
        depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
        verify return:1
        depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
        verify return:1
        depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        verify return:1
        depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        verify return:1
        ---
        Certificate chain
         0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
           i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        Server certificate
        -----BEGIN CERTIFICATE-----
        <cert removed>
        -----END CERTIFICATE-----
        subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
        issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
        ---
        No client certificate CA names sent
        ---
        SSL handshake has read 1563 bytes and written 435 bytes
        ---
        New, TLSv1/SSLv3, Cipher is RC4-SHA
        Server public key is 2048 bit
        Secure Renegotiation IS NOT supported
        Compression: NONE
        Expansion: NONE
        SSL-Session:
            Protocol  : TLSv1
            Cipher    : RC4-SHA
            Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717
            Session-ID-ctx:
            Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2
            Key-Arg   : None
            Start Time: 1303258052
            Timeout   : 300 (sec)
            Verify return code: 0 (ok)
        ---
        QUIT
        DONE

    How can I get curl to verify this cert using the --capath option?
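
    A quick sanity check that the directory is usable as a CApath (a sketch; the checks are read-only and server-cert.pem is a placeholder for a locally saved copy of the server certificate): confirm that c_rehash created the hash-named symlinks OpenSSL looks up, and run openssl verify against the directory the same way curl would.

        # the hashed symlinks OpenSSL resolves CA certs through
        ls -l ./certs/            # expect names like <hash>.0 pointing at the cert files
        openssl x509 -noout -hash -in ./certs/verisign-intermediate-ca.crt
        # verify a saved copy of the server certificate against the directory
        openssl verify -CApath ./certs/ server-cert.pem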

    Read the article
