Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.

  • How to execute a shell script on startup?

    - by vijay.shad
    I have created a script to start a server (my first question). Now I want it to run at system boot and start the defined server. What should I do to get this done? My findings tell me to put the file in /etc/init.d, and it will be executed when the system boots. But I don't understand how the first argument at startup ends up being "start". Is it predefined somewhere that "start" is passed as $1? And if I want a "startall" case that starts all the servers in the script, what are my options? My script looks like this:
      #!/bin/bash
      case "$1" in
        start)   start ;;
        stop)    stop ;;
        restart) $0 stop
                 $0 start ;;
        *)       echo "usage: $0 (start|stop|restart)" ;;
      esac
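
    A minimal SysV-style sketch of how this is usually wired up: the rc?.d symlinks installed by update-rc.d are what make init invoke the script with "start" (or "stop") as $1 at boot. The script name, server paths, and the startall case below are hypothetical:
      #!/bin/bash
      # /etc/init.d/myservers -- hypothetical name; init invokes this as
      # "/etc/init.d/myservers start" at boot, which is why $1 is "start".

      start_one() {                       # start a single server; path is an assumption
          /opt/servers/"$1"/bin/run.sh &
      }

      case "$1" in
        start)    start_one serverA ;;
        startall) for s in serverA serverB serverC; do start_one "$s"; done ;;
        stop)     pkill -f /opt/servers/ ;;
        restart)  "$0" stop; "$0" start ;;
        *)        echo "usage: $0 {start|startall|stop|restart}"; exit 1 ;;
      esac
    Register it with "sudo update-rc.d myservers defaults" on Debian/Ubuntu; chkconfig plays the same role on Red Hat-style systems.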

    Read the article

  • Forward the Wan IP to another Wan IP without changing the source address

    - by user195410
    I have tried to do this with the NAT function in iptables, but it fails. Example: PC A's IP is 1.1.1.1 (Win7), my server's IP is 2.2.2.2 (CentOS 6.2), and the target server B is 3.3.3.3 (Windows Server 2003). Flow: PC A (WAN IP) -- my server A -- server B (WAN IP). My iptables rules:
      1. iptables -t nat -A PREROUTING -d 2.2.2.2 -p tcp --dport 80 -j DNAT --to-destination 3.3.3.3:80
      2. iptables -t nat -A POSTROUTING -d 2.2.2.2 -j MASQUERADE
    Finally, I can reach server B's website by entering 2.2.2.2:80, but when I checked the access log on server B I found that the source address had been changed to src:2.2.2.2 dst:3.3.3.3. Please help me figure out how to preserve the real address, i.e. src:1.1.1.1 dst:3.3.3.3.
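
    A sketch of the usual approach: the MASQUERADE/SNAT rule is exactly what rewrites the source to 2.2.2.2, so to keep the client address you forward with DNAT only and make sure server B's replies route back through server A (for example, A is B's gateway, or B has a route for the client range via A). That routing assumption, and everything below, is mine, not confirmed by the question:
      # on server A (the CentOS box): enable forwarding
      sysctl -w net.ipv4.ip_forward=1

      # rewrite only the destination; leave the source (1.1.1.1) untouched
      iptables -t nat -A PREROUTING -d 2.2.2.2 -p tcp --dport 80 \
               -j DNAT --to-destination 3.3.3.3:80
      iptables -A FORWARD -d 3.3.3.3 -p tcp --dport 80 -j ACCEPT
      iptables -A FORWARD -s 3.3.3.3 -p tcp --sport 80 -j ACCEPT

      # no MASQUERADE/SNAT here -- that was the rule stamping src:2.2.2.2.
      # If server B does not send its replies back via server A, the return
      # traffic bypasses the NAT and the connection never completes.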

    Read the article

  • Is there a max thread per mongrel?

    - by Blankman
    I don't know much about Ruby, much less what is involved in hosting a Ruby on Rails web app. But I recall hearing someone say that they have to run multiple Mongrels because of a limit of 50 threads. Is this true (or something similar)? Why does it have this limitation?

    Read the article

  • Properly Hosting Multiple Sites on VDS

    - by Aristotle
    I'm going to be moving about 7-10 websites (5-8 with MySQL databases) onto our new Virtual Private Server. I'm curious, though, what the best way to host many sites on a single server is. Do I create a directory for each site immediately within my root directory and then point the domain names for each site to http://123.123.123.123/siteDirectory, or is there a more appropriate way to do this? I'm very interested in maintaining control over how many concurrent connections each site can have at any given time - would I be able to do that at the directory level, or am I required to limit the concurrent connections of the VPS itself?
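
    One common layout, sketched with nginx name-based virtual hosts (the paths, domain names, and the 25-connection cap are placeholders, and this assumes nginx 1.1+): a shared limit_conn zone keyed on $server_name gives each site its own concurrency ceiling rather than one limit for the whole VPS.
      # /etc/nginx/conf.d/sites.conf -- hypothetical example
      limit_conn_zone $server_name zone=persite:10m;

      server {
          listen 80;
          server_name site-one.example.com;
          root /var/www/site-one;
          limit_conn persite 25;    # at most 25 concurrent connections for this site
      }

      server {
          listen 80;
          server_name site-two.example.com;
          root /var/www/site-two;
          limit_conn persite 25;
      }
    Apache can do the same split with <VirtualHost> blocks, though per-vhost connection limits there usually need an extra module such as mod_qos.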

    Read the article

  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've set up a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script works fine when executed manually. However, when it runs under cron, empty files get dropped into the .git directory. The files have names that look like base64 output, e.g. juTrvjP6m8 and kcKf3hu3b4. Two of these files show up for every cron run. I thought these might be commit hashes, but they're not; git-show says it's an unknown revision. I set up the repo as follows:
      git svn init http://svn.ip.addr/repo
      git svn fetch svn-remote
    My script looks like this:
      cd /gitsvn/dir
      git svn fetch svn-remote
      git svn push pub
    The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm piping the output from the cron job to a file, which looks like this:
      fatal: unable to run 'git-svn'
      Counting objects: 21, done.
      Delta compression using up to 2 threads.
      Compressing objects: 100% (10/10), done.
      Writing objects: 100% (11/11), 59.08 KiB, done.
      Total 11 (delta 8), reused 0 (delta 0)
      To /gitpub/repo.git
         360faf5..a153b0d  trunk -> trunk
    The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any suggestions? Where are these empty garbage files coming from, and how do I stop them? Am I in for bigger problems in the future? BTW, I'm using git 1.6.3.3.
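
    One common cause (an assumption, not confirmed by the question) is cron's minimal environment: if git can't find its git-svn helper on cron's PATH, a run can fail partway and leave temporary object files behind, which would also explain the "fatal: unable to run 'git-svn'" line. A cron-hardened sketch of the script, with the PATH value and script name being guesses for a typical layout:
      #!/bin/bash
      # Hypothetical /usr/local/bin/gitsvn-sync.sh -- invoked from cron.
      # Cron usually runs with PATH=/usr/bin:/bin, so spell out anything extra.
      export PATH=/usr/local/bin:/usr/bin:/bin

      set -e                      # stop on the first failure instead of pushing anyway
      cd /gitsvn/dir

      git svn fetch svn-remote
      git push pub                # assuming the original "git svn push pub" was meant
                                  # as a plain push to the bare "pub" remote

      # Crontab entry (sketch):
      #   15 3 * * *  /usr/local/bin/gitsvn-sync.sh >>/var/log/gitsvn-sync.log 2>&1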

    Read the article

  • define software path

    - by shantanuo
    I have an older version already installed. I have upgraded the package using the setup.py install command, but the path is not set correctly. When I type "s3cmd" it shows the older version of the software:
      # s3cmd
      s3cmd [options] <command> [arg(s)]    version 1.2.6
      --help -h --verbose -v --dryrun -n
      # which s3cmd
      /usr/local/bin/s3cmd
    The correct version is in a different folder, and I would like that one to be used whenever I type the command:
      # /usr/bin/s3cmd
      Consider using --configure parameter to create one.
    How do I set the path? I have added a path to the .bash_profile file, but it does not work:
      PATH=$PATH:/usr/bin/s3cmd
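
    A sketch of the usual fix: PATH entries must be directories, not the binary itself, and a directory appended at the end still loses to /usr/local/bin, which is searched first. Either remove the stale copy or put /usr/bin ahead of it (paths as reported in the question):
      # Option 1: remove the stale copy so /usr/bin/s3cmd is the only one left
      sudo rm /usr/local/bin/s3cmd
      hash -r                          # make bash forget its cached lookup of the old path

      # Option 2: in ~/.bash_profile, prepend the directory (not the file) you prefer
      export PATH=/usr/bin:$PATH
      # then reload it for the current shell:
      source ~/.bash_profile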

    Read the article

  • Editing sudoers file to restrict a user's commands

    - by devin
    Is it possible to edit the sudoers file so a user can use sudo for any command except a specified one? The reverse is true, I believe: the sudoers file can be set up so that a user can only execute a given list of commands. EDIT: the commands I really want to take away are halt and reboot... this makes me think there are special system calls for halt and reboot. Can you take system calls away from a user? If not, is it because the Unix permission system abstracts over system calls and neglects this?
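
    A sketch of the usual sudoers form (edit it with visudo); the username and binary paths are assumptions, and the standard caveat applies: command exclusions like this are easy to sidestep by copying or renaming a binary, so treat them as a convenience rather than a hard security boundary. On the system-call side, reboot(2) already requires root (the CAP_SYS_BOOT capability), so an unprivileged user can only reach it through sudo in the first place.
      # /etc/sudoers (edit with visudo) -- hypothetical user "devin"
      # Everything is allowed except halt, reboot and shutdown.
      devin ALL=(ALL) ALL, !/sbin/halt, !/sbin/reboot, !/sbin/shutdown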

    Read the article

  • OpenSSH does not accept public key?

    - by Bob
    I've been trying to solve this for a while, but I'm admittedly quite stumped. I just started up a new server and was setting up OpenSSH to use key-based SSH logins, but I've run into quite a dilemma. All the guides are relatively similar, and I was following them closely (despite having done this once before). I triple-checked my work to see if I would notice some obvious screw-up, but nothing is apparent. As far as I can tell, I haven't done anything wrong (and I've checked very closely). If it's any help, on my end I'm using Cygwin and the server is running Ubuntu 12.04.1 LTS. Anyway, here is the output (I've removed/censored some parts for privacy (primarily anything with my name, website, or its IP address), but I can assure you that nothing is wrong there):
      $ ssh user@host -v
      OpenSSH_5.9p1, OpenSSL 0.9.8r 8 Feb 2011
      debug1: Connecting to host [ipaddress] port 22.
      debug1: Connection established.
      debug1: identity file /home/user/.ssh/id_rsa type 1
      debug1: identity file /home/user/.ssh/id_rsa-cert type -1
      debug1: identity file /home/user/.ssh/id_dsa type -1
      debug1: identity file /home/user/.ssh/id_dsa-cert type -1
      debug1: identity file /home/user/.ssh/id_ecdsa type -1
      debug1: identity file /home/user/.ssh/id_ecdsa-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1
      debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_5.9
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: sending SSH2_MSG_KEX_ECDH_INIT
      debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
      debug1: Server host key: ECDSA 24:68:c3:d8:13:f8:61:94:f2:95:34:d1:e2:6d:e7:d7
      debug1: Host 'host' is known and matches the ECDSA host key.
      debug1: Found key in /home/user/.ssh/known_hosts:2
      debug1: ssh_ecdsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Offering RSA public key: /home/user/.ssh/id_rsa
      debug1: Authentications that can continue: publickey
      debug1: Trying private key: /home/user/.ssh/id_dsa
      debug1: Trying private key: /home/user/.ssh/id_ecdsa
      debug1: No more authentication methods to try.
      Permission denied (publickey).
    What can I do to resolve my problem?
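
    Since the client offers the RSA key and the server silently declines it, the usual culprits are server-side file permissions or the public key never landing in authorized_keys. A checklist sketch to run on the Ubuntu server (usernames and paths assumed, not taken from the question):
      # 1. Permissions: sshd refuses keys if these are group/world-writable
      chmod go-w ~
      chmod 700 ~/.ssh
      chmod 600 ~/.ssh/authorized_keys

      # 2. Make sure the public key actually made it over (run from the client)
      ssh-copy-id user@host
      # or: cat ~/.ssh/id_rsa.pub | ssh user@host 'cat >> ~/.ssh/authorized_keys'

      # 3. Confirm sshd is looking where you think it is
      grep -iE 'AuthorizedKeysFile|PubkeyAuthentication' /etc/ssh/sshd_config

      # 4. Watch the server's side of the story while you retry the login
      sudo tail -f /var/log/auth.log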

    Read the article

  • Postmaster uses excessive CPU and Disk Writes

    - by wolfcastle
    Using PostgreSQL 9.1.2, I'm seeing excessive CPU usage and large amounts of writes to disk from postmaster tasks. This happens even while my application is doing almost nothing (tens of inserts per MINUTE). There are a reasonable number of connections open, however. I've been trying to determine what in my application is causing this. I'm pretty newb with postgresql and haven't gotten anywhere so far. I've turned on some logging options in my config file and looked at connections in the pg_stat_activity table, but they are all idle. Yet each connection consumes ~50% CPU and is writing ~15M/s to disk (reading nothing). I'm basically using the stock postgresql.conf with very few tweaks. I'd appreciate any advice or pointers on what I can do to track this down. Here is a sample of what top/iotop is showing me:
      Cpu(s): 18.9%us, 14.4%sy, 0.0%ni, 53.4%id, 11.8%wa, 0.0%hi, 1.5%si, 0.0%st
      Mem:  32865916k total,  7263720k used, 25602196k free,   575608k buffers
      Swap: 16777208k total,        0k used, 16777208k free,  4464212k cached

        PID USER     PR NI VIRT RES  SHR S %CPU %MEM   TIME+   COMMAND
      17057 postgres 20  0 236m 33m  13m R 45.0  0.1  73:48.78 postmaster
      17188 postgres 20  0 219m 15m  11m R 42.3  0.0  61:45.57 postmaster
      17963 postgres 20  0 219m 16m  11m R 42.3  0.1  27:15.01 postmaster
      17084 postgres 20  0 219m 15m  11m S 41.7  0.0  63:13.64 postmaster
      17964 postgres 20  0 219m 17m  12m R 41.7  0.1  27:23.28 postmaster
      18688 postgres 20  0 219m 15m  11m R 41.3  0.0  63:46.81 postmaster
      17088 postgres 20  0 226m 24m  12m R 41.0  0.1  64:39.63 postmaster
      24767 postgres 20  0 219m 17m  12m R 41.0  0.1  24:39.24 postmaster
      18660 postgres 20  0 219m 14m 9.9m S 40.7  0.0  60:51.52 postmaster
      18664 postgres 20  0 218m 15m  11m S 40.7  0.0  61:39.61 postmaster
      17962 postgres 20  0 222m 19m  11m S 40.3  0.1  11:48.79 postmaster
      18671 postgres 20  0 219m 14m   9m S 39.4  0.0  60:53.21 postmaster
      26168 postgres 20  0 219m 15m  10m S 38.4  0.0  59:04.55 postmaster

      Total DISK READ: 0.00 B/s | Total DISK WRITE: 195.97 M/s
        TID PRIO USER     DISK READ DISK WRITE SWAPIN  IO>    COMMAND
      17962 be/4 postgres 0.00 B/s  14.83 M/s  0.00 %  0.25 % postgres: aggw aggw [local] idle
      17084 be/4 postgres 0.00 B/s  15.53 M/s  0.00 %  0.24 % postgres: aggw aggw [local] idle
      17963 be/4 postgres 0.00 B/s  15.00 M/s  0.00 %  0.24 % postgres: aggw aggw [local] idle
      17188 be/4 postgres 0.00 B/s  14.80 M/s  0.00 %  0.24 % postgres: aggw aggw [local] idle
      17964 be/4 postgres 0.00 B/s  15.50 M/s  0.00 %  0.24 % postgres: aggw aggw [local] idle
      18664 be/4 postgres 0.00 B/s  15.13 M/s  0.00 %  0.23 % postgres: aggw aggw [local] idle
      17088 be/4 postgres 0.00 B/s  14.71 M/s  0.00 %  0.13 % postgres: aggw aggw [local] idle
      18688 be/4 postgres 0.00 B/s  14.72 M/s  0.00 %  0.00 % postgres: aggw aggw [local] idle
      24767 be/4 postgres 0.00 B/s  14.93 M/s  0.00 %  0.00 % postgres: aggw aggw [local] idle
      18671 be/4 postgres 0.00 B/s  16.14 M/s  0.00 %  0.00 % postgres: aggw aggw [local] idle
      17057 be/4 postgres 0.00 B/s  13.58 M/s  0.00 %  0.00 % postgres: aggw aggw [local] idle
      26168 be/4 postgres 0.00 B/s  15.50 M/s  0.00 %  0.00 % postgres: aggw aggw [local] idle
      18660 be/4 postgres 0.00 B/s  15.85 M/s  0.00 %  0.00 % postgres: aggw aggw [local] idle
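
    A diagnostic sketch for pinning down what those "idle" backends are actually writing; PID 17057 is just one process from the listing above (substitute any of them), and the column names are the ones PostgreSQL 9.1 uses:
      # What files is backend 17057 writing to? (run as root)
      strace -p 17057 -e trace=write,fsync -tt 2>&1 | head -n 50
      ls -l /proc/17057/fd          # map the descriptors strace reports to file names

      # Ask PostgreSQL itself about activity and background-writer churn
      psql -U postgres -c "SELECT procpid, waiting, current_query FROM pg_stat_activity;"
      psql -U postgres -c "SELECT * FROM pg_stat_bgwriter;"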

    Read the article

  • How to monitor traffic on certain ports with ntop

    - by Claudiu
    How do I configure ntop so I can get the amount of upload traffic sent through a certain port? I've added the port to ntop/protocol.list and restarted ntop, and after some time I checked Summary - Traffic - TCP/UDP Traffic Port Distribution: Last Minute View, but the data in that table is not very relevant. I think there is much more about ntop that I don't know (configuration, usage).

    Read the article

  • Can I find which script outputs which error?

    - by ibrahim
    There is a script which calls other scripts, and they call others... I don't know exactly which scripts are called, or how many of them there are. I only know that some of them add iptables rules, and I get this error when I call the root script:
      iptables: No chain/target/match by that name.
      iptables: No chain/target/match by that name.
    My problem is that I cannot find which script outputs these errors. Is there any way or tool to find that out?
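
    A sketch using bash's execution trace, which prints every command together with the file and line it came from; the PS4 format and the script path are just examples, and exporting SHELLOPTS is what carries the trace into child bash scripts:
      # Make the trace show which file and line each command comes from
      export PS4='+ ${BASH_SOURCE}:${LINENO}: '

      # Turn tracing on and propagate it into child bash scripts too
      set -x
      export SHELLOPTS

      # Run the top-level script, capturing everything to a log
      /path/to/root-script.sh > trace.log 2>&1
      set +x

      # Then look for the offending calls
      grep -n -B2 'No chain/target/match' trace.log
      grep -n 'iptables' trace.log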

    Read the article

  • ubuntu: sending mail with postfix?

    - by ajsie
    I've got some questions about how this works. Ubuntu Server comes with Postfix installed. If I want my PHP script to send a mail to, let's say, [email protected], how does it work? Do I have to specify the IP of another MTA (my ISP's MTA?) in Postfix's configuration file? And if someone replies, will it reach my IP? Is it Postfix that receives it, or does that involve fetchmail?
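
    A sketch of the two main.cf settings involved; the relay hostname is a placeholder for whatever your ISP's smarthost is. Inbound mail only reaches Postfix if your domain's MX record points at your server and port 25 is open; fetchmail is only needed if the mail instead sits in a mailbox at the ISP.
      # /etc/postfix/main.cf (fragment)

      # Outbound: either deliver directly (leave relayhost empty) or hand everything
      # to the ISP's smarthost -- hostname below is hypothetical.
      relayhost = [smtp.your-isp.example]:587

      # Inbound: accept mail addressed to your own domain.
      mydestination = example.com, localhost
      inet_interfaces = all
    Apply the change with "sudo postfix reload".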

    Read the article

  • ISE (Germany) and Techdata Azlan: Exadata win over IBM at Immonet

    - by Javier Puerta
    Immonet, a subsidiary of German media company Axel Springer, provides cross-media real estate marketing via the Internet, newspapers, and other channels. The Immonet.de website is the number two German property portal, with approximately 1.8 million unique visitors per month and more than 950,000 current online offerings. Read here how ISE used Exadata to solve the performance problems that Immonet was experiencing.

    Read the article

  • Bad Spot to Be In: Playing Catch-up with Mobile Advertising

    - by Mike Stiles
    You probably noticed, there’s a mass migration going on from online desktop/laptop usage to smartphone/tablet usage.  It’s an indicator of how we live our lives in the modern world: always on the go, with no intention of being disconnected while out there. Consequently, paid as it relates to mobile advertising is taking the social spotlight. eMarketer estimated that in 2013, US adults would spend about 2 hours, 21 minutes a day on mobile, not counting talking time. More people in the world own smartphones than own toothbrushes (bad news I suppose if you’re marketing toothpaste). They’re using those mobile devices to access social networks, consuming at least 17% of their mobile time on them. Frankly, you don’t need a deep dive into mobile usage stats to know what’s going on. Just look around you in any store, venue or coffee shop. It’s really obvious…our mobile devices are now where we “are,” so that’s where marketers can increasingly reach us. And it’s a smart place for them to do just that. Mobile devices can be viewed more and more as shopping facilitators. Usually when someone is on mobile, they are not in passive research mode. They are likely standing near a store or in front of a product, using their mobile to seek reassurance that buying that product is the right move. They are the hottest of hot prospects. Consider that 4 out of 5 consumers use smartphones to shop, 52% of Americans use mobile devices for in-store for research, 70% of mobile searches lead to online action inside of an hour, and people that find you on mobile convert at almost 3x the rate as those that find you on desktop or laptop. But what are marketers doing? Enter statistics from Mary Meeker’s latest State of the Internet report. Common sense says you buy advertising where people are spending their eyeball time, right? But while mobile is 20% of media use and rising, the ad spend there is 4%. Conversely, while print usage is at 5% and falling, ad spend there is 19%. We all love nostalgia, but come on. There are reasons marketing dollar migration to mobile has not matched user migration, including the availability of mobile ad products and the ability to measure user response to mobile ads. But interesting things are happening now. First came Facebook’s mobile ad, which let app developers pay to get potential downloads. Then their mobile ad network was announced at F8, allowing marketers to target users across non-Facebook apps while leveraging the wealth of diverse data Facebook has on those users, a big deal since Nielsen has pointed out mobile apps make up 89% of the media time spent on mobile. Twitter has a similar play in motion with their MoPub acquisition. And now mobile deeplinks have arrived, which can take users straight to sub-pages of mobile apps for a faster, more direct shopper/researcher user experience. The sooner the gratification, the smoother and faster the conversion. To be clear, growth in mobile ad spending is well underway. After posting $13.1 billion in 2013, Gartner expects global mobile ad spending to reach $18 billion this year, then go to $41.9 billion by 2017. Cheap smartphones and data plans are spreading worldwide, further fueling the shift to mobile. Mobile usage in India alone should grow 400% by 2018. And, of course, there’s the famous statistic that mobile should overtake desktop Internet usage this year. How can we as marketers mess up this opportunity? Two ways. 
We could position ourselves in perpetual “catch-up” mode and keep spending ad dollars where the public used to be. And we could annoy mobile users with horrid old-school marketing practices. Two-thirds of users told Forrester they think interruptive in-app ads are more annoying than TV ads. Make sure your brand’s social marketing technology platform is delivering a crystal-clear picture of your social connections so the mobile touch point is highly relevant, mobile optimized, and delivering real value and satisfying experiences. Otherwise, all we’ve done is find a new way to be unwanted. @mikestiles @oraclesocial  Photo: Kate Mallatratt, freeimages.com

    Read the article

  • Monitoring Maximum Process Network Activity

    - by user125973
    We have a colocated server on which we run some OpenVZ hosts. Recently we have had to pay a lot extra because we keep exceeding our bandwidth quota. Our quota is 5 Mb/s, but we spike to almost 50. I looked at the graphs and there are spikes happening at certain intervals. I want to know which process is causing this, so I need a tool that monitors the processes and gives me the one with the maximum instantaneous traffic (it doesn't matter how much traffic we have as long as we don't exceed the 5 Mb/s quota). Does anyone have a recommendation for this? My hosts are CentOS 5 with OpenVZ, so I can see all the container processes from the host, if that helps in any way.
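
    A sketch with two common tools (the interface name is a guess, and on CentOS 5 both are typically installed from the EPEL repository): nethogs groups live traffic by process, iftop by connection.
      # Per-process view: which PID is pushing the most right now?
      nethogs eth0

      # Per-connection view, useful to see which container IP a spike belongs to
      iftop -i eth0 -P

      # For an unattended record of the spikes, sample nethogs in trace mode
      nethogs -t eth0 >> /var/log/nethogs-trace.log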

    Read the article

  • Why did I fail to build ctags for vim?

    - by hugemeow
    I got the latest, unreleased version of the ctags source code from the svn repository using
      svn co https://ctags.svn.sourceforge.net/svnroot/ctags
    I ran ./configure, which failed with the following error:
      config.status: creating Makefile
      config.status: WARNING: 'Makefile.in' seems to ignore the --datarootdir setting
      config.status: error: cannot find input file: config.h.in
      [mirror@home ctags-5.7]$ echo $?
      1
    Then I created an empty file named config.h.in, and now ./configure succeeds:
      configure: creating ./config.status
      config.status: creating Makefile
      config.status: WARNING: 'Makefile.in' seems to ignore the --datarootdir setting
      config.status: creating config.h
      [mirror@home ctags-5.7]$ echo $?
      0
    Running make still failed:
      [mirror@home ctags-5.7]$ make
      gcc -I. -I. -DHAVE_CONFIG_H -g -O2 -c args.c
      In file included from args.c:17:
      /usr/include/stdio.h:88: error: two or more data types in declaration specifiers
      make: *** [args.o] Error 1
    Why did this not work? How do I build ctags from the svn repository?
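
    A sketch of the usual way to build from a raw autotools checkout: config.h.in is a generated file, so it should come from autoheader (or autoreconf), not be created empty — an empty one leaves config.h without its #define lines, which is consistent with the compile error above. The tool names are the stock GNU ones; package names vary by distro.
      # inside the fresh svn checkout
      cd ctags

      # regenerate configure, config.h.in and friends from the checked-in sources
      autoreconf -f -i      # roughly: aclocal && autoheader && autoconf (and automake if used)

      ./configure
      make
      sudo make install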

    Read the article

  • Ubuntu 10.04 install - frozen at splash, no errors

    - by Andrew Bolster
    Has anyone else come across this? After about as much of a fresh install as I can muster without buying new drives, and after walking through the amd64 alternate install with ease, and after a little 'pre-splash' screen where the orange dots under the (very sexy) new Ubuntu logo blink away, I'm left with a vista of purple hues and the logo plonked in the middle, with the dots not going anywhere. I was at this same position last night at 3 in the morning; I left it lying overnight and nothing had changed, so I'm pretty sure it's frozen. But when I go in and inspect /var/log/* in the recovery console: no errors, no complaints, no problems. I'm at my wits' end and am just about ready to try anything. If this were on SO I'd be bountying, but if anyone can help you'll just have to cope with my thanks! Additional details are on my blog and in my first attempt at asking for help.
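
    A sketch of the usual next step when the splash hangs and the logs are clean: boot without the splash so the real console messages are visible, and rule out a kernel mode-setting problem with nomodeset. This is generic GRUB editing, not a confirmed fix for this machine:
      # At the GRUB menu, highlight the Ubuntu entry and press 'e' to edit it.
      # On the line starting with "linux" (GRUB 2) or "kernel" (GRUB legacy), replace
      #     quiet splash
      # with
      #     nomodeset text
      # then boot the edited entry (Ctrl+X or 'b', depending on GRUB version).

      # If that gets you to a console or a working desktop, make it permanent (GRUB 2):
      sudo sed -i 's/quiet splash/nomodeset/' /etc/default/grub
      sudo update-grub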

    Read the article

  • How can I repair my USB drive?

    - by yurko
    The USB drive is in a read-only state and I can't repair it. First of all I tried to erase it using dd:
      root@yurko-laptop:/home/yurko-laptop# ls -l /dev/disk/by-id | grep usb
      lrwxrwxrwx 1 root root  9 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0 -> ../../sdb
      lrwxrwxrwx 1 root root 10 ??? 18 23:45 usb-Generic_Flash_Disk_C173828A-0:0-part1 -> ../../sdb1
      root@yurko-laptop:/home/yurko-laptop# dd if=/dev/zero of=/dev/sdb
      dd: writing to «/dev/sdb»: No space left on device
      8257537+0 records in
      8257536+0 records out
      4227858432 bytes (4.2 GB) copied, 942.633 s, 4.5 MB/s
    After that I wanted to create a new filesystem using fdisk:
      root@yurko-laptop:/home/yurko-laptop# fdisk /dev/sdb
      You will not be able to write the partition table.
      WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
               switch off the mode (command 'c') and change display units to
               sectors (command 'u').
      Command (m for help): p
      Disk /dev/sdb: 4227 MB, 4227858432 bytes
      4 heads, 63 sectors/track, 32768 cylinders
      Units = cylinders of 252 * 512 = 129024 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1              18       32768     4126596    b  W95 FAT32
      Command (m for help):
    fdisk showed that the partition still exists and I can't write the partition table. I tried to delete the existing partition:
      Command (m for help): d
      Selected partition 1
      Command (m for help): w
      Unable to write /dev/sdb
      root@yurko-laptop:/home/yurko-laptop#
    Why am I not able to write the partition table? Does it mean that some hardware failure has occurred? And is it possible to repair this USB drive? I've tried hdparm, and it shows that the read-only flag is on:
      root@yurko-laptop:/home/yurko-laptop# hdparm /dev/sdb
      /dev/sdb:
      SG_IO: bad/missing sense data, sb[]:  f0 00 05 00 00 00 00 0a 00 00 00 00 26 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       multcount     =  0 (off)
       readonly      =  1 (on)
       readahead     = 256 (on)
       geometry      = 1016/131/62, sectors = 8257536, start = 0

    Read the article

  • Disable user_deny.db in Cyrus/imapd

    - by netmano
    We have Cyrus 2.4.12 on Debian; we use packages rather than building each piece of software ourselves. I am constantly getting this log message, for various users and 8-10 times per user request:
      fetching user_deny.db entry for 'user123'
    I have searched for it but haven't found a real solution. There were some patches for 2.3.xx, but we don't want to build it ourselves; we prefer to use packages. Is there any way to disable user_deny.db altogether? We don't need this feature, and it wastes CPU as well as disk.

    Read the article

  • How do I restart MySQL in monit when page contains specific text?

    - by Tyler
    How do I check if a web page contains the text "Error connecting to database" and, if the text exists in the page, restart the database? Here's what I have so far, but it isn't working:
      check host website.com with address website.com
        group database
        start program = "/usr/bin/service mysql start"
        stop program  = "/usr/bin/service mysql stop"
        if url http://website.com content == "Error connecting to database" then restart
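
    For comparison, a plain shell/cron sketch of the same check, deliberately sidestepping monit's content-test syntax; the URL and service name are taken from the question, the script path and schedule are arbitrary:
      #!/bin/bash
      # Hypothetical /usr/local/bin/check-db-page.sh -- run from cron, e.g. every minute:
      #   * * * * * root /usr/local/bin/check-db-page.sh

      if curl -s --max-time 10 http://website.com | grep -q "Error connecting to database"; then
          logger "check-db-page: error text found, restarting mysql"
          /usr/bin/service mysql restart
      fi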

    Read the article

  • How do I make XTerm not use bold?

    - by mike
    I like using XTerm, I like its default "fixed" font, and I like using terminal colors rather than having a monochromatic terminal. However, XTerm seems to insist on using a bold version of the font whenever it's displaying a bright color. I hate hate hate the bold version of the font, but I like the brightness. The man page seems to suggest that adding "XTerm.VT100.boldMode:false" to my ~/.Xresources would disable this "feature", but it doesn't seem to have any effect. I've had it in there for months, so it's not a rebooting issue. How can I force XTerm to always use the standard, non-bold version of the fixed font, even when it's displaying bright text? Edit: Some have suggested putting "XTerm*boldMode: false" in my ~/.Xresources. That didn't help either. I've confirmed that the changes have taken effect with xrdb, though:
      $ xrdb -query | grep boldMode
      XTerm*boldMode: false
    And if I run xprop and click an xterm, I get "WM_CLASS(STRING) = "xterm", "XTerm"", so I'm definitely running real xterms. BTW, this is just a plain-vanilla Ubuntu Intrepid box. If anyone else here is running the same, can you try running:
      echo -e '#\e[1m#'
    ...and let me know whether the # on the right has a black pixel in the middle like the one on the left does?
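
    A sketch of the resource combination that usually does the trick; the resource names come from the xterm man page, but which one matters depends on the xterm version, so treat this as trial and error rather than a guaranteed fix:
      ! ~/.Xresources
      XTerm*boldMode: false
      ! keep bright colors without switching to the bold glyphs
      XTerm*boldColors: true
      ! newer xterms: refuse bold fonts outright
      XTerm*allowBoldFonts: false
      ! or pin the bold font to the normal one so "bold" looks identical
      XTerm*VT100.boldFont: fixed
    Reload with "xrdb -merge ~/.Xresources" and start a fresh xterm to test.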

    Read the article

  • Using secure proxies with Google Chrome

    - by cYrus
    Whenever I use a secure proxy with Google Chrome I get ERR_PROXY_CERTIFICATE_INVALID. I have tried a lot of different scenarios and versions.
    The certificate. I'm using a self-signed certificate:
      openssl genrsa -out key.pem 1024
      openssl req -new -key key.pem -out request.pem
      openssl x509 -req -days 30 -in request.pem -signkey key.pem -out certificate.pem
    Note: this certificate works (with a warning, since it's self-signed) when I use it to set up a simple HTTPS server.
    The proxy. Then I start a secure proxy on localhost:8080. There are several ways to accomplish this; I tried: a custom Node.js script; stunnel; node-spdyproxy (OK, this involves SPDY too, but later... the problem is the same); [...]
    The browser. Then I run Google Chrome with:
      google-chrome --proxy-server=https://localhost:8080 http://superuser.com
    to load, say, http://superuser.com.
    The issue. All I get is:
      Error 136 (net::ERR_PROXY_CERTIFICATE_INVALID): Unknown error.
    in the window, and something like:
      [13633:13639:1017/182333:ERROR:cert_verify_proc_nss.cc(790)] CERT_PKIXVerifyCert for localhost failed err=-8179
    in the console. Note: this is not the big red warning that complains about insecure certificates. Now, I have to admit I'm quite a n00b when it comes to certificates and such, so if I'm missing some fundamental point, please let me know.
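
    A sketch of the usual workaround: for an HTTPS proxy, Chrome validates the proxy's certificate against its NSS store and never offers the click-through warning, so the self-signed certificate has to be trusted explicitly, and its CN has to match the name used in --proxy-server (here, localhost). The package name and certificate nickname below are assumptions:
      # make sure the cert's CN is "localhost" when answering the openssl req prompts,
      # then import it into Chrome's per-user NSS database as a trusted CA for SSL
      sudo apt-get install libnss3-tools        # provides certutil
      certutil -d sql:$HOME/.pki/nssdb -A -t "C,," -n "my-proxy-cert" -i certificate.pem

      # verify it is there
      certutil -d sql:$HOME/.pki/nssdb -L

      # then restart Chrome and retry
      google-chrome --proxy-server=https://localhost:8080 http://superuser.com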

    Read the article
