Search Results

Search found 22829 results on 914 pages for 'nautilus script'.

Page 663/914 | < Previous Page | 659 660 661 662 663 664 665 666 667 668 669 670  | Next Page >

  • Linux foxboard network monitor

    - by het.oosten
    I want to use a Foxboard as a simple network monitor for multiple routers (all routers are connected to the internet). The Foxboard is a mini PC with an embedded version of Debian. My idea is to use multiple virtual network devices like this:

    eth0   192.168.2.10
    eth0:1 192.168.3.10
    eth0:2 192.168.4.10

    I found a nice Python script to ping an external host here (the solution from Ryan Cox): http://stackoverflow.com/questions/316866/ping-a-site-in-python Is it possible to configure Debian to use eth0 when I ping www.site-a.com and eth0:1 when I ping www.site-b.com?
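
    Not part of the original post, but a rough sketch of one approach: Linux normally picks the outgoing address from its routing table, while ping lets you pin the source address explicitly with -I, so a small Python wrapper can probe each site through a chosen alias. The host/address pairs below are only illustrative.

    #!/usr/bin/env python
    # Hypothetical sketch: ping each monitored host from a specific source address.
    # Assumes the eth0 aliases from the question exist and that return routing works.
    import os
    import subprocess

    TARGETS = {
        "www.site-a.com": "192.168.2.10",   # eth0
        "www.site-b.com": "192.168.3.10",   # eth0:1
    }

    def is_up(host, source_ip):
        # -I selects the source address, -c 1 sends one probe, -W 2 waits 2 seconds.
        cmd = ["ping", "-c", "1", "-W", "2", "-I", source_ip, host]
        devnull = open(os.devnull, "w")
        return subprocess.call(cmd, stdout=devnull, stderr=devnull) == 0

    if __name__ == "__main__":
        for host, src in sorted(TARGETS.items()):
            print("%s via %s: %s" % (host, src, "up" if is_up(host, src) else "DOWN"))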

    Read the article

  • Interested in scp recipe for sftp [closed]

    - by GJZ
    You wrote in a reply (quoted below):

    The problem is that sftp runs as the user's id -- first, the sftp client ssh's into the target host as the given user, then runs sftp-server. Since sftp-server is running as a regular user, it has no way to "give away" a file (change the owner of a file). However, if you are able to use scp, and assign a key pair to each user, you can get around this. This involves adding a user's key to root's ~/.ssh/authorized_keys file, with a "command=" parameter to force it to run a script that sanitizes and alters the arguments of the server-side scp program. I've used this technique before to set up an anonymous scp dropbox that allowed anyone to submit a file, while ensuring that no one could retrieve submitted files, and also to prevent overwrites. If you are open to this technique, let me know and I'll update this post with a quick recipe.

    We are interested in this quick scp recipe for our community file-sharing services. Best Regards, Gert Jan Zeilstra
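
    Not from the original thread, but a rough illustration of the forced-command technique the quote describes: a wrapper named in root's ~/.ssh/authorized_keys that inspects SSH_ORIGINAL_COMMAND and only permits scp uploads into one fixed dropbox directory. The wrapper path, key options and dropbox location are all made up for this sketch.

    #!/usr/bin/env python
    # Hypothetical forced-command wrapper, e.g. saved as /usr/local/bin/scp-dropbox.py
    # and wired into root's ~/.ssh/authorized_keys roughly like:
    #   command="/usr/local/bin/scp-dropbox.py",no-pty,no-port-forwarding ssh-rsa AAAA... user@host
    import os
    import re
    import sys

    DROPBOX = "/srv/dropbox"   # assumed upload-only directory

    def main():
        original = os.environ.get("SSH_ORIGINAL_COMMAND", "")
        # Accept only the server side of an scp upload ("scp ... -t <path>"), nothing else.
        if not re.match(r"^scp(\s+-[prvd])*\s+-t\s+\S+$", original):
            sys.stderr.write("only scp uploads are allowed\n")
            return 1
        # Ignore the client-supplied path and force the dropbox directory, so files
        # cannot be written elsewhere or read back out.
        os.execvp("scp", ["scp", "-t", DROPBOX])

    if __name__ == "__main__":
        sys.exit(main())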

    Read the article

  • outlook iptables configuration [update]

    - by mediaexpert
    I have a Debian mail server, but only the Outlook users are unable to download their emails. I've seen a lot of posts about some kind of port forwarding configuration and I've tried some commands, but I haven't been able to solve this problem. Please help me.

    [LAST UPDATE] I see a lot of TIME_WAIT connections on IPv6 in netstat:

    tcp6  0  0 my.mailserver.it:imap2  200-62-245-188.ip2:17060  TIME_WAIT  -

    Below are some config files.

    pop3d (I think the problem is here):

    ##NAME: POP3AUTH:1
    #
    # To advertise the SASL capability, per RFC 2449, uncomment the POP3AUTH
    # variable:
    #
    # POP3AUTH="LOGIN"
    #
    # If you have configured the CRAM-MD5, CRAM-SHA1 or CRAM-SHA256, set POP3AUTH
    # to something like this:
    #
    # POP3AUTH="LOGIN CRAM-MD5 CRAM-SHA1"
    POP3AUTH=""

    ##NAME: POP3AUTH_ORIG:1
    #
    # For use by webadmin
    POP3AUTH_ORIG="PLAIN LOGIN CRAM-MD5 CRAM-SHA1 CRAM-SHA256"

    ##NAME: POP3AUTH_TLS:1
    #
    # To also advertise SASL PLAIN if SSL is enabled, uncomment the
    # POP3AUTH_TLS environment variable:
    #
    # POP3AUTH_TLS="LOGIN PLAIN"
    POP3_TLS_REQUIRED = 0
    POP3AUTH_TLS=""

    ##NAME: POP3AUTH_TLS_ORIG:0
    #
    # For use by webadmin
    POP3AUTH_TLS_ORIG="LOGIN PLAIN"

    ##NAME: POP3_PROXY:0
    #
    # Enable proxying. See README.proxy
    POP3_PROXY=0

    ##NAME: PROXY_HOSTNAME:0
    #
    # Override value from gethostname() when checking if a proxy connection is
    # required.
    #
    PROXY_HOSTNAME=

    ##NAME: PORT:1
    #
    # Port to listen on for connections. The default is port 110.
    #
    # Multiple port numbers can be separated by commas. When multiple port
    # numbers are used it is possible to select a specific IP address for a
    # given port as "ip.port". For example, "127.0.0.1.900,192.68.0.1.900"
    # accepts connections on port 900 on IP addresses 127.0.0.1 and 192.68.0.1.
    # The ADDRESS setting is a default for ports that do not have a specified
    # IP address.
    PORT=110

    ##NAME: ADDRESS:0
    #
    # IP address to listen on. 0 means all IP addresses.
    ADDRESS=0

    ##NAME: TCPDOPTS:0
    #
    # Other couriertcpd(1) options. The following defaults should be fine.
    #
    TCPDOPTS="-nodnslookup -noidentlookup"

    ##NAME: LOGGEROPTS:0
    #
    # courierlogger(1) options.
    #
    LOGGEROPTS="-name=pop3d"

    ##NAME: DEFDOMAIN:0
    #
    # Optional default domain. If the username does not contain the
    # first character of DEFDOMAIN, then it is appended to the username.
    # If DEFDOMAIN and DOMAINSEP are both set, then DEFDOMAIN is appended
    # only if the username does not contain any character from DOMAINSEP.
    # You can set different default domains based on the interface IP
    # address using the -access and -accesslocal options of couriertcpd(1).
    DEFDOMAIN="@interzone.it"

    ##NAME: POP3DSTART:0
    #
    # POP3DSTART is not referenced anywhere in the standard Courier programs
    # or scripts. Rather, this is a convenient flag to be read by your system
    # startup script in /etc/rc.d, like this:
    #
    # . /etc/courier/pop3d
    # case x$POP3DSTART in
    # x[yY]*)
    #   /usr/lib/courier/pop3d.rc start
    #   ;;
    # esac
    #
    # The default setting is going to be NO, until Courier is shipped by default
    # with enough platforms so that people get annoyed with having to flip it to
    # YES every time.
    POP3DSTART=YES

    ##NAME: MAILDIRPATH:0
    #
    # MAILDIRPATH - directory name of the maildir directory.
    #
    MAILDIRPATH=.maildir

    iptables:

    Chain INPUT (policy DROP 20 packets, 1016 bytes)
     pkts bytes target prot opt in   out source         destination
    60833   16M ACCEPT tcp  --  eth0 *   0.0.0.0/0      0.0.0.0/0    tcp dpt:143 state NEW,ESTABLISHED
    18970  971K ACCEPT tcp  --  *    *   0.0.0.0/0      0.0.0.0/0    tcp spts:1024:65535 dpt:110 state NEW,ESTABLISHED

    Chain FORWARD (policy DROP 0 packets, 0 bytes)
     pkts bytes target prot opt in   out source         destination
        0     0 ACCEPT tcp  --  *    *   192.168.0.0/24 0.0.0.0/0    tcp dpt:110
        0     0 ACCEPT all  --  *    *   0.0.0.0/0      0.0.0.0/0    state RELATED,ESTABLISHED
        0     0 ACCEPT tcp  --  *    *   192.168.1.0/24 0.0.0.0/0    tcp dpt:110
        0     0 ACCEPT all  --  *    *   0.0.0.0/0      0.0.0.0/0    state RELATED,ESTABLISHED
        0     0 ACCEPT tcp  --  *    *   0.0.0.0/0      0.0.0.0/0    state NEW tcp dpt:25
        0     0 ACCEPT tcp  --  *    *   0.0.0.0/0      0.0.0.0/0    state NEW tcp dpt:110

    pop3d.cnf:

    RANDFILE = /usr/lib...pop3d.rand
    [req]
    default_bits = 1024
    encrypt_key = yes
    distinguidhed_name = req_dn
    x509_extensions = cert_type
    prompt = no
    [req_dn]
    C=US
    ST=NY
    L=New York
    O=Courier Mail Server
    OU=Automatically-generated POP3 SSL key
    CN=localhost
    [email protected]
    [cert_type]
    nsCertType = server

    Read the article

  • How to show users the reason for a message being bounced or rejected by Postfix?

    - by Ross Bearman
    A user would like to be able to view a web page showing any emails that a Postfix server has either been unable to send or unable to receive. For example, if the user was supposed to receive an email from a third party but it hasn't arrived, they'd be able to check the web page and see a list of emails rejected by Postfix, along with a clear reason why. I've been unable to find an existing application that offers this functionality. Does anyone know of any, or is the best way forward to write a script that parses the log and displays the results?
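
    As a rough sketch of the log-parsing route (not an existing tool): Postfix records each delivery attempt in the mail log with a status= field, so a short script can pull out bounced or deferred recipients together with the reason text for display on a web page. The log path and regular expression below are assumptions and may need adjusting for the local syslog setup.

    #!/usr/bin/env python
    # Minimal sketch: list bounced/deferred recipients from a Postfix mail log.
    import re

    LOG = "/var/log/mail.log"   # assumed Debian/Ubuntu default location
    PATTERN = re.compile(
        r"to=<(?P<rcpt>[^>]+)>.*status=(?P<status>bounced|deferred)\s+\((?P<reason>.*)\)")

    def failed_deliveries(path=LOG):
        with open(path) as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m:
                    yield m.group("rcpt"), m.group("status"), m.group("reason")

    if __name__ == "__main__":
        for rcpt, status, reason in failed_deliveries():
            print("%-9s %-35s %s" % (status, rcpt, reason))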

    Read the article

  • Let's do the Time Warp again!

    - by Mike Dietrich
    Once you start reading about Daylight Saving Time changes in MyOracleSupport you'll still find a lot of notes explaining this and that, back and forth. But sometimes there seems to be a bit too much information - and a lack of clear instructions. One customer called it the "Time Zone Spaghetti": after reading MOS notes about DST for several hours he ended up back at the note where he had begun to read, still not clear what to do. I usually use the scripts from MOS Note:977512.1, as you just have to exchange the DST version you are upgrading to, and it has everything you need to check and adjust the time zone data in the database - for instance after applying the DST V18 patch to your database homes. As a reminder to myself when traveling I have stored a copy of the script part of that note here - and please note that this is not an official Oracle version. Always read and check the original MOS Note:977512.1, as it may have changed in the meantime, may contain corrections, and has a lot more explanatory information than I could cover here. Credit to Gunter Vermeir from Oracle Support, who owns that MOS Note and has compiled all that useful stuff together. DST_prepare.sql DST_adjust.sql

    Read the article

  • Is it possible to dump the names of all the open files in notepad++ to a file?

    - by mark
    So, I dragged and dropped multiple files onto Notepad++. The files came from different directories and were selected using different criteria, so I now have many files open in Notepad++. I need a list of all the open files in another file. Right now, my only option is to script the decisions that guided me in selecting the files in the first place, which is probably the best approach in the long term, but I wonder if there is a quick way in Notepad++ - some plugin magic or whatever. Suggesting another free editor which has this function is a good option too (not that I am going to ditch Notepad++, God forbid).
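
    One quick possibility, sketched here rather than taken from an existing plugin: when "Remember current session" is enabled, Notepad++ keeps the open tabs in a session.xml under %APPDATA%\Notepad++, so a few lines of Python can dump the file names. The path and the exact XML layout are assumptions to verify against the local install.

    #!/usr/bin/env python
    # Sketch: dump the file names recorded in Notepad++'s saved session.
    import os
    import xml.etree.ElementTree as ET

    session = os.path.join(os.environ["APPDATA"], "Notepad++", "session.xml")

    tree = ET.parse(session)
    with open("open_files.txt", "w") as out:
        for node in tree.iter("File"):          # each open tab is a <File> element
            name = node.get("filename")
            if name:
                out.write(name + "\n")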

    Read the article

  • How to setup IP alias on bridged interface in Ubuntu

    - by Anonymouslemming
    How do I set up an IP alias on a bridge (br0) device on Ubuntu? If I wait for br0 to come up and then run

    /sbin/ifconfig br0:0 192.168.10.1 netmask 255.255.255.0

    then it works fine. If however I add the following to my /etc/network/interfaces file, it does not work and the network fails to start:

    auto br0:0
    iface br0:0 inet static
        address 192.168.10.1
        netmask 255.255.255.0

    At the moment, I have a script in /etc/network/if-up.d/bridge_alias that does this as follows:

    #!/bin/bash
    if [ "${LOGICAL}" == "br0" ] && [ "${PHASE}" = "post-up" ]; then
        echo -n "Starting br0:0 ... "
        /sbin/ifconfig br0:0 192.168.10.2 netmask 255.255.255.0
        echo "Done!"
    fi

    What is the right way of doing this, though, just using the OS network config files?

    Read the article

  • How do I securely share my server?

    - by Blue
    I have a large dedicated server running Debian and I want to share it with about 6 friends of mine. I know I can simply use adduser to create user accounts for them, but I want to know whether they can, even as regular users without root permissions, do anything malicious. I know that by default they have read permissions on other users' directories under /home, and I can solve that with chmod, but I just want to make sure that there's nothing else they can do. Also, is there any kind of script or program that makes it easier to create and manage shell users on a server?
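
    For the scripting part, a minimal sketch (an illustration, not a recommendation of any specific tool): loop over the friends' names, create each account non-interactively, and close their home directory to other users. The account names are placeholders; adjust the adduser options to taste.

    #!/usr/bin/env python
    # Sketch: create locked-down shell accounts for a known list of friends.
    # Must run as root; assumes a Debian-style adduser.
    import subprocess

    FRIENDS = ["alice", "bob", "carol"]   # placeholder usernames

    for user in FRIENDS:
        # --disabled-password: they log in with an SSH key you install later.
        subprocess.check_call(["adduser", "--disabled-password", "--gecos", "", user])
        # Make the home directory unreadable to other regular users.
        subprocess.check_call(["chmod", "750", "/home/%s" % user])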

    Read the article

  • SuexecUserGroup not working in Apache 2.4

    - by James W.
    I have upgraded PHP from version 5.3 to 5.4 via yum, which required upgrading Apache from version 2.2 to 2.4. After doing the configuration, it turns out that the userid and groupid being used are still the global user/group, which is "apache".

    <VirtualHost *:80>
        ServerName example.com
        ServerAdmin [email protected]
        DocumentRoot "/path/to/webroot"
        ....
        ....
        <IfModule mod_fcgid.c>
            SuexecUserGroup user-name group-name
            <Directory "/path/to/webroot">
                Options +ExecCGI
                AllowOverride All
                AddHandler fcgid-script .php
                FcgidWrapper /path/to/webroot/php-fcgi-scripts/php-fcgi-starter .php
                Order allow,deny
                Allow from all
            </Directory>
        </IfModule>
        ........
    </VirtualHost>

    /etc/httpd/modules/base.conf:

    LoadModule suexec_module modules/mod_suexec.so

    I would appreciate it if anyone could advise what I missed. Thanks.

    Read the article

  • Update a DNS record to point to a dynamic IP

    - by zobgib
    I want to use my school's connection as a place to host a small webserver, but one problem I have run into is that any time my server reboots I am given a new IP inside the school's range. All of the school's IPs are public and therefore I can access my computer directly over the WAN just via the IP given in ifconfig. I would like to be able to give my computer a DNS name, which is easy enough when I change the A records to match the current IP of my computer. The problem is that if my computer ever reboots (my school regularly cycles power at night and over holidays) I am assigned a new IP and have to notice it and then update the A records. This is inconvenient, and I figure there must be a better way to keep the DNS records updated, either via a script or my own BIND server, so that if there is a power cycle I can still access the server via a domain name. If you have any direction to point me in it would be much appreciated. I am running Ubuntu 10.04 if that helps :).
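
    A sketch of the script route, offered as an illustration rather than the asker's solution: if the DNS provider (or your own BIND server) accepts RFC 2136 dynamic updates, a small boot-time or cron script can detect the current address and push it with nsupdate. The zone, record name, key file and IP-detection URL below are placeholders.

    #!/usr/bin/env python
    # Sketch: push this machine's current public IP to a DNS A record via nsupdate.
    # Assumes a BIND-style server that accepts TSIG-signed dynamic updates.
    import subprocess
    import urllib2   # Python 2, matching Ubuntu 10.04

    RECORD = "myserver.example.com."      # placeholder record name
    ZONE = "example.com."                 # placeholder zone
    KEYFILE = "/etc/bind/ddns.key"        # placeholder TSIG key file

    def current_ip():
        # Any "what is my IP" style service works here; this URL is illustrative.
        return urllib2.urlopen("http://checkip.example.org/").read().strip()

    def update_dns(ip):
        commands = "\n".join([
            "zone %s" % ZONE,
            "update delete %s A" % RECORD,
            "update add %s 300 A %s" % (RECORD, ip),
            "send",
            "",
        ])
        proc = subprocess.Popen(["nsupdate", "-k", KEYFILE], stdin=subprocess.PIPE)
        proc.communicate(commands)

    if __name__ == "__main__":
        update_dns(current_ip())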

    Read the article

  • running autobench (httperf)

    - by Matthew
    So I ran apt-get install httperf on my system and I can now run httperf. But how can I run autobench? I downloaded the file and unarchived it, and if I go into the directory and run autobench it says: -bash: command not found. I think it's a Perl script, but if I run perl autobench, it says:

    root@example:/tmp/autobench-2.1.2# perl autobench
    Autobench configuration file not found - installing new copy in /root/.autobench.conf
    cp: cannot stat `/etc/autobench.conf': No such file or directory
    Installation complete - please rerun autobench

    Even if I run it again it says the same thing.

    Read the article

  • Squid url rewrites https>>http

    - by bobfran
    I'm exploring some uses of Squid proxy 2.7 and I have seen a good number of examples of url rewrites that take urls such as http://somesitename.com and have the rewriter change the url to https://somesitename.com, and those examples work great. What I'm wondering, though, is whether it is possible to do the reverse with a squid url rewriter, that is, to go from https://somesitename.com to http://somesitename.com. Simply editing the script file that handles the rewrites doesn't seem to do the trick, so I was wondering if there are certain things I have to configure squid to do first, if it's even possible to do what I am asking. I have my browser manually set up to use squid as a proxy for all requests and I can see https requests showing up in my squid access.log file (via the CONNECT method).
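
    For reference, a minimal shape of a Squid 2.7 url_rewrite_program helper in Python, not taken from the original post: the helper reads one request per line on stdin and writes the (possibly rewritten) URL back. Note that for real HTTPS traffic the browser issues CONNECT and Squid only sees host:port, so an https-to-http rewrite like this only fires on URLs that actually reach the rewriter.

    #!/usr/bin/env python
    # Sketch of a Squid url_rewrite_program helper that downgrades https:// to http://.
    # Configured in squid.conf with something like:
    #   url_rewrite_program /usr/local/bin/https_to_http.py
    import sys

    def main():
        for line in sys.stdin:
            parts = line.split()
            if not parts:
                continue
            url = parts[0]          # input format: URL ip/fqdn ident method ...
            if url.startswith("https://"):
                url = "http://" + url[len("https://"):]
            sys.stdout.write(url + "\n")
            sys.stdout.flush()      # Squid expects an unbuffered reply per request

    if __name__ == "__main__":
        main()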

    Read the article

  • How can I make my PHP development environment more efficient?

    - by pixel
    I want to start a home-brew pet project in PHP. I've spent some time in my life developing in PHP and I've always felt it was hard to organize the development environment efficiently. In my previous PHP work, I used a Windows desktop machine and a Linux server for development. This configuration had its advantages: it's easy to configure Apache (and its modules), PHP and MySQL on a Linux box, and, at the time, this configuration was the same as on the production server. However, I never successfully set up a debug connection between my Eclipse install and Xdebug on the server. Transferring files from my local workspace to the server was also very annoying (either ftp or a Bazaar script moving files from the repository to the web root). For my new setup, I'm considering installing everything on my local machine. I'm afraid that it will slow down workstation performance (LAMP + Eclipse), and that compatibility problems will kick in. What would you recommend? Should I develop using two separate machines? On one? Do you have experience using one of the above configurations in your work?

    Read the article

  • Strange PHP output buffering

    - by radek-k
    PHP:

    header('Content-type: text/plain');
    for ($i = 0; $i < 10; $i++) {
        echo "$i\r\n";
        ob_flush();
        flush();
        sleep(1);
    }

    I tried the script above on 2 different servers. Both respond with the numbers 0...9, one per line. In the case of the first server each number is received every second. In the case of the second server there is no output for 10 seconds and then the entire output is displayed at once. What might be wrong in the second case? I tried various output control functions but it didn't help. The set of response headers in both cases is pretty much the same:

    HTTP/1.1 200 OK
    Date: Mon, 03 Jan 2011 19:21:21 GMT
    Server: Apache
    X-Powered-By: PHP/5.2.14
    Keep-Alive: timeout=15, max=100
    Connection: Keep-Alive
    Transfer-Encoding: chunked
    Content-Type: text/plain

    Read the article

  • Where can I learn about managing domain names for my websites? [closed]

    - by Shahbaz
    [I originally asked this question on serverfault.com, where it was closed as 'out of scope.' Hopefully it is appropriate for this forum]

    I am a developer who doesn't understand how to effectively manage Internet domain names. Say I registered a name with Namecheap and host a website on Linode. Now what is an A record? What is a name server, and do I host it with Namecheap or Linode? Why would I pay Amazon when others are free? Does any of this matter in terms of website latency or reliability? I feel like a script kiddie, copying and pasting others' settings and hoping they work. Is there a book or other resource that explains all this? I know Amazon is full of books about DNS, but afaik they are about setting up DNS servers for local networks, not the Internet.

    p.s. To emphasize, I'm asking for books or long write-ups which explain this to technically competent people who just haven't had to think about the role of commercial registrars, name servers, commercial hosts, commercial websites and how all the parts play together on the real internet (not local networks).

    Read the article

  • Can't boot after compiling 3.1 kernel, can only get to terminal

    - by olssy
    Long story short: I tried compiling kernel 3.1 on Ubuntu 11.10 at the same time I had an update waiting for a reboot. The computer would boot to a black screen and hang there. I ended up installing 11.10 on top of the old install with a Live CD. Then I had a purple screen on bootup but it would eventually boot. I realized Grub was the problem and tried some stuff, but nothing worked. I then tried to install the proprietary ATI video drivers, and since then nothing has worked: no grub menu (purple screen), and when it boots into the kernel it ends up hanging. I can sometimes get a terminal up with alt-fx. I have tried removing the ATI drivers with the ATI script, purging my fglrx driver, reconfiguring my xorg.conf and following any tutorial I can find about fixing a broken graphics driver, but to no avail. I've gotten to a point where it seems that the ATI proprietary drivers are correctly loaded, but there is still no grub boot menu and it won't boot into Ubuntu. I've chased down my logs and this line is from kern.log:

    unity-greeter[3269]: segfault at 0 ip b7245cbb sp bf9d3900 error 4 in libgio-2.0.so.0.3000.0[b71ad000+142000]

    That line leads me to believe I don't have the correct libgio-2 library on my system, but I have no idea how to find out which package has the correct version... My xorg.conf has no errors and seems to imply the fglrx drm module got loaded correctly. It would be a bit complicated pasting the whole file here, but if it would help I'll post it. Lastly, running fglrxinfo gives me:

    Error: Unable to open display (null)

    Any help or link to another tutorial would be appreciated. Thanks.

    Read the article

  • What are the common techniques to handle user-generated HTML modified differently by different browsers?

    - by Jakie
    I am developing a website updater. The front end uses HTML, CSS and JavaScript, and the backend uses Python. The way it works is that <p/>, <b/> and some other HTML elements can be updated by the user. To enable this, I load the webpage and, with jQuery, convert all those elements to <textarea/> elements. Once the content of a textarea is changed, I apply the change to the original elements and send it to a Python script to store the new content. The problem is that I'm finding that different browsers change the original HTML. How do you get around this issue? What Python libraries do you use? What techniques or application designs do you use to avoid or overcome this issue? The problems I found are:

    IE removes the quotes around class and id attributes. For example, <img class='abc'/> becomes <img class=abc/>.
    Firefox removes the backslash from the line breaks: <br \> becomes <br>.
    Some websites have very specific display technicalities, so the insertion of a simple "\n" (which IE does) can affect the display of a website. Example: changing <img class='headingpic' /><div id="maincontent"> to <img class='headingpic'/>\n <div id="maincontent"> inserts a vertical gap in IE.

    The things I have unsuccessfully tried to overcome these issues:

    Using either jQuery or Python to remove all >\n< occurrences, <br>, etc. But this fails because I get different patterns in IE, sometimes a ·\n, sometimes a \n···.
    In Python, parse the new HTML, extract the new text/content, and insert it into the old HTML so the elements and format never change, just the content. This is very difficult and seems to be overkill.
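
    One common workaround, sketched here as a suggestion rather than something from the post: never store or compare the browser's raw markup directly; run whatever comes back through an HTML parser on the Python side and re-serialize it, so attribute quoting, void-element syntax and stray whitespace are normalized the same way regardless of browser. This assumes BeautifulSoup is available; any tolerant HTML parser would do.

    #!/usr/bin/env python
    # Sketch: normalize browser-submitted HTML before storing or diffing it.
    # Requires: pip install beautifulsoup4
    from bs4 import BeautifulSoup

    def normalize(fragment):
        # Parsing and re-serializing canonicalizes attribute quoting, tag case
        # and self-closing syntax, so most IE/Firefox differences disappear.
        soup = BeautifulSoup(fragment, "html.parser")
        return str(soup)

    if __name__ == "__main__":
        ie_version = '<img class=abc><br>'
        ff_version = '<img class="abc"/><br/>'
        print(normalize(ie_version))
        print(normalize(ff_version))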

    Read the article

  • xcopy files and directories

    - by user1044937
    I have folders named "C:\Jobs\job#1", "C:\Jobs\job#2", "C:\Jobs\job#3", etc., and a lot of directories and sub-directories under them. I want to get all the directories under Jobs and xcopy them to C:\Backup. Then I want to xcopy all the files under each job#1, 2, 3, etc. to C:\Backup\job#1\month\*.*

    To make it clearer:

    Source dir = C:\Jobs\job#1\"myfiles&dir"
    Destination dir = C:\Backup\job#1\month\"myfiles&dir"

    then do the next folder:

    Source dir = C:\Jobs\job#2\"myfiles&dir"
    Destination dir = C:\Backup\job#2\month\"myfiles&dir"

    ...until all folders are backed up. Since the number of job folders keeps increasing, by doing it this way I don't have to add extra code to this script except to modify the month. Thank you.
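
    A sketch of one way to script this, offered as an illustration rather than the asker's solution (and in Python instead of a batch file calling xcopy): walk C:\Jobs and copy each job folder's contents into C:\Backup\<job>\<month>. The month label and paths are assumptions.

    #!/usr/bin/env python
    # Sketch: copy every C:\Jobs\<job> tree to C:\Backup\<job>\<month>.
    import os
    import shutil
    import time

    SRC_ROOT = r"C:\Jobs"
    DST_ROOT = r"C:\Backup"
    MONTH = time.strftime("%Y-%m")    # e.g. "2012-05"; adjust to taste

    for job in os.listdir(SRC_ROOT):
        src = os.path.join(SRC_ROOT, job)
        if not os.path.isdir(src):
            continue
        dst = os.path.join(DST_ROOT, job, MONTH)
        if os.path.exists(dst):
            continue                  # already backed up this month
        shutil.copytree(src, dst)     # creates intermediate directories as needed
        print("copied %s -> %s" % (src, dst))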

    Read the article

  • Linux - why am I allowed to remove root's file?

    - by 0xDEAD BEEF
    The situation is as follows: I su to root, then I create an admin file with cat adminfile, then I exit from root by issuing the exit command. I can see the following adminfile permissions:

    -rw-r--r-- 1 root root 10 2010-06-16 16:25 adminfile

    However, after executing rm adminfile it really gets removed:

    -rw-r--r-- 1 root root 10 2010-06-16 16:25 adminfile
    reinis@reinis-desktop:~/Test/script$ rm adminfile
    rm: remove write-protected regular file `adminfile'?

    tada.. the file is gone! As I see it, others have only read permission for that file, so they should not be able to remove it.. :/

    Read the article

  • Executing a command as apache

    - by Lord Loh.
    This script keeps outputting a 1 and I cannot understand why.

    <?php
    passthru("nohup sudo rndc reload sd.example.com", $op);
    print_r($op);
    ?>

    I have also tried the above code without the nohup. I have the following line in my sudoers file:

    apache ALL = NOPASSWD: /usr/sbin/rndc reload sd.example.com

    Just to test, I temporarily allowed apache a shell, logged in as apache with sudo su apache, and successfully managed to execute sudo rndc reload sd.example.com. I do not see any error messages in my log files either. What could I possibly be doing wrong? None of the similar threads have pointed me to anything that solved my problem or helped me debug it.

    Read the article

  • PuTTY automatically supply password

    - by Kyle Cronin
    I have a situation where I need to have PuTTY (or another SSH client for Windows) automatically log into another machine via SSH. I realize that this isn't a good idea security-wise, but unfortunately I'm constrained by the limitations both on the client and the server. The best solution would be to have a shortcut or script on the desktop that, when double clicked, will connect to the server and automatically log in. Can I do this with PuTTY? I am willing to explore public key authentication, but I'm not sure where the PuTTY key resides or how to copy it to the server, as the app starts automatically upon login.

    Read the article

  • Should sanity be a property of a programmer or a program?

    - by toplel32
    I design and implement languages that can range from object notations to markup languages. In many cases I have considered restrictions in favor of sanity (common knowledge), as in the case of control characters in identifiers. There are two consequences to consider before doing this: it takes extra computation, and it narrows liberty. I'm interested to learn how developers think about decisions like this. As you may know, Microsoft C# is, on the contrary, very open. If you really want to mark your integer literal as long with an 'l' suffix instead of 'L', and so risk other developers confusing '1' and 'l', no problem. If you want to name your variables in a non-Latin script so they will contrast with C#'s Latin keywords, no problem. Or if you want to distribute a string over multiple lines and so break a series of indentation, no problem. It is cheap to ensure consistency with restrictions, and this makes it tempting to implement. But in the case of disallowing non-Latin characters (concerning the second example), it means a discredit to Unicode, because one would not take full advantage of its capacity.

    Read the article

  • Limit number of concurrent user logins in Windows Server 2008 Active Directory

    - by smhnaji
    Is there a way to limit Active Directory users' maximum concurrent login sessions? I've read many articles and discussions about a solution, but none of them seem to work. Many suggested a UserLogin script, which doesn't work in Windows Server 2008. Some others suggested CConnect, which is not good enough and is also very complicated. Others have pointed to UserLock, which has to be paid for. It's surprising that Windows Server 2003 DOES have the feature (albeit as a third-party tool), but Windows Server 2008 doesn't! One of the articles I've read: http://www.edugeek.net/forums/windows-server-2008-r2/61216-multiple-logins.html

    Read the article

  • How do i set a (open_)basedir with php using fastcgi/nginx?

    - by acidzombie24
    Essentially I found out that you can limit the folders each user has access to by using PHP's basedir/open_basedir. I'd like each PHP site to have access only to its own files, so I wrote fastcgi_param open_basedir $document_root; in the hope that it would work. It didn't. I googled and only found results saying you can't do it via FastCGI or nginx. Is this true, or can I do it? PS: I do spawn PHP as its own user (rather than www-data) so it doesn't wreak havoc on my non-PHP websites. But I'd still like to prevent one PHP script on a PHP site from accessing other directories (if I have a WordPress install on yourface.com it's pretty obvious a valid PHP path is /var/www/yourface/<wordpress scripts>).

    Read the article

  • What to do if you find a vulnerability in a competitor's site?

    - by user17610
    While working on a project for my company, I needed to build functionality that allows users to import/export data to/from our competitor's site. While doing this, I discovered a very serious security exploit that could, in short, run any script on the competitor's website. My natural inclination is to report the issue to them in a spirit of goodwill. Exploiting the issue to gain an advantage crossed my mind, but I don't want to go down that path. So my question is, would you report a serious vulnerability to your direct competition in order to help them? Or would you keep your mouth shut? Is there a better way of going about this, perhaps to gain at least some advantage from the fact that I'm helping them by reporting the issue?

    Update (Clarification): Thanks for all your feedback so far, I appreciate it. Would your answers change if I added that the competitor in question is a behemoth in the market (hundreds of employees on several continents), and my company only started a few weeks ago (three employees)? It goes without saying that they most definitely will not remember us, and if anything will only realize that their site needs work (which is why we entered this market in the first place). I confess this is one of those moral vs. business toss-ups, but I appreciate all the advice.

    Read the article
