Search Results

Search found 40644 results on 1626 pages for 'content script'.


  • iptables captive portal remove user

    - by Burgos
    I followed this guide: http://aryo.info/labs/captive-portal-using-php-and-iptables.html and am implementing a captive portal using iptables. I've set up the web server and iptables on a Linux router, and everything works as it should. I can allow a user to access the internet with

        sudo iptables -I internet -t mangle -m mac --mac-source USER_MAC_ADDRESS -j RETURN

    and I can remove that access with

        sudo iptables -D internet -t mangle -m mac --mac-source USER_MAC_ADDRESS -j RETURN

    However, after removal the user can still open the last viewed page as many times as he wants (if he restarts his Ethernet adapter, future connections will be blocked). On the blog page I found a script:

        /usr/sbin/conntrack -L \
            | grep $1 \
            | grep ESTAB \
            | grep 'dport=80' \
            | awk \
            "{ system(\"conntrack -D --orig-src $1 --orig-dst \" \
                substr(\$6,5) \" -p tcp --orig-port-src \" substr(\$7,7) \" \
                --orig-port-dst 80\"); }"

    It should remove the user's "redirection" connection track, as written, but when I execute it, nothing happens - the user still has access to that page. Yet when I run /usr/sbin/conntrack -L | grep USER_IP after executing the script, nothing is returned. So my question: is there anything else that can help me clear these connection tracks? Obviously, I can't reset my own network adapter, nor the user's.
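
    A simpler variant worth trying (a sketch, not tested here; the MAC and IP are placeholders) is to let conntrack do the matching itself instead of parsing its output with grep and awk, and to drop every tracked flow for the client rather than only the port-80 ones:

        #!/bin/bash
        # deauth.sh - revoke portal access for one client (hypothetical helper)
        # Usage: deauth.sh USER_MAC_ADDRESS USER_IP
        MAC="$1"
        IP="$2"

        # Remove the bypass rule from the mangle table
        iptables -D internet -t mangle -m mac --mac-source "$MAC" -j RETURN

        # Kill every tracked connection where the client is the original source
        # or the reply destination, so established HTTP sessions die too
        conntrack -D --orig-src "$IP"
        conntrack -D --reply-dst "$IP"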


  • My URL has been identified as a phishing site

    - by user2118559
    Some months ago I ordered a VPS at Ramnode and, following the tutorial (ZPanelCP on CentOS 6.4) at http://www.zvps.co.uk/zpanelcp/centos-6, installed CentOS and ZPanel. Today I received this email:

        We are requesting that you secure and investigate the phishing website
        identified below. This URL has been identified as a phishing site and is
        currently involved in identity theft activities.

        URL: hxxp://111.11.111.111/www.connet-itunes.fr/iTunesConnect.woasp/

        This site is being used to display false or spoofed content in an apparent
        effort to steal personal and financial information. This matter is URGENT.
        We believe that individuals are being falsely directed to this page and may
        be persuaded into divulging personal information to a criminal, if the
        content is not immediately disabled.

    (The IP above is modified, not the real one.) Trying to understand: some hacker hacked the VPS and placed some file (?) with content that redirects to www.connet-itunes.fr/iTunesConnect.woasp? Then my questions:

    1) How can I find the file? Where might it be located? The URL is an IP address, not a domain name.
    2) What should I do to protect the VPS (with CentOS)? Any tutorial? Where might the security problem be? I mean, maybe someone has faced something similar.
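
    Until a real answer turns up, a rough triage sketch (the paths are guesses - ZPanel versions differ in where they keep vhost document roots):

        # Recently modified files under the panel's web tree are prime suspects
        find /var/zpanel -type f -mtime -30 -ls 2>/dev/null

        # Search for the phishing hostname anywhere in served content
        grep -r "connet-itunes" /var/zpanel /etc/httpd 2>/dev/null

        # Check recent logins and what the web server actually served
        last -a | head -n 20
        grep -i "itunesconnect" /var/log/httpd/*access* 2>/dev/null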


  • Can't access one directory via HTTPS + public FQDN

    - by Justin James
    Hello - I have the strangest IIS error that I've ever seen in my life. I have an application/directory on an IIS server that throws an error 500 when accessing ANY of the content in it, including HTML documents, when accessed via HTTPS AND the machine's FQDN. When I access it with "localhost", it works fine. When I added a bogus entry for the NIC's IP in the hosts file, it worked fine. When I access it with the machine's name over HTTP, it works fine. Here's a chart (the machine's name is "lofn.titaniumcrowbar.com"):

        http  - lofn.titaniumcrowbar.com: works
        https - lofn.titaniumcrowbar.com: broken
        https - localhost: works
        https - temp.titaniumcrowbar.com (put into hosts file): works

    I set up tracing, and I got some useless information: "The I/O operation has been aborted because of either a thread exit or an application request. (0x800703e3)" This would make sense, except that it happens when pulling up static content. While the directory may be an "application", all the content in it is static. Any/all suggestions, no matter how strange, are VERY appreciated. Thanks! J.Ja


  • fwbuilder/iptables manually scripted + autogenerated rules at startup?

    - by Jakobud
    Fedora 11. Our previous IT guy set up the iptables rules on our firewall in a way that confuses me, and he didn't document any of it. I was hoping someone could help me make sense of it. The iptables service obviously starts at boot, but the /etc/sysconfig/iptables file was untouched (default values). I found that in /etc/rc.local he was doing this:

        # We have multiple ISP connections on our network.
        # The following is about 50+ rules to route incoming and outgoing
        # information. For example, certain internal hosts are specified here
        # to use ISP A connection while everyone else on the network uses
        # ISP B connection when access the internet.
        ip rule add from 99.99.99.99 table Whatever_0
        ip rule add from 99.99.99.98 table Whatever_0
        ip rule add from 99.99.99.97 table Whatever_0
        ip rule add from 99.99.99.96 table Whatever_0
        ip rule add from 99.99.99.95 table Whatever_0
        ip rule add from 192.168.1.103 table ISB_A
        ip rule add from 192.168.1.105 table ISB_A
        ip route add 192.168.0.0/24 dev eth0 table ISB_B
        # etc...

    And near the end of the file, AFTER all the ip rules he just declared, he has this:

        /root/fw/firewall-rules.fw

    He's executing the firewall rules file that was auto-generated by fwbuilder. Some questions:

    - Why is he declaring all these ip rules in rc.local instead of declaring them in fwbuilder like all the other rules? Is there any advantage or necessity to this, or is it just a poorly organized way to implement firewall rules?
    - Why is he declaring the ip rules BEFORE executing the fwbuilder script? I would assume that one of the first things the fwbuilder script does is get rid of any existing rules before declaring all the new ones. Am I wrong about this? If not, the fwbuilder script would basically just delete all the ip rules that were defined in rc.local. Does this make any sense?
    - Why is he executing all this at startup from rc.local instead of just using iptables-save to keep the firewall settings in /etc/sysconfig/iptables, which gets applied at boot?
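
    One background fact that may explain part of this (my reading, not the previous admin's): ip rule entries belong to iproute2 policy routing, not to iptables, so a fwbuilder script that flushes iptables chains normally leaves them untouched - which would make the "declare ip rules first, then run the fwbuilder script" ordering harmless. Easy to verify read-only:

        # Policy routing state lives outside iptables
        ip rule show                 # the 'from X lookup Y' selectors
        ip route show table ISB_A    # routes installed in one custom table

        # iptables state is tracked separately
        iptables-save | head         # none of the ip rules appear here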


  • Determine if the "yes" is necessary when doing an SCP

    - by glowcoder
    I'm writing a Groovy script to do an SCP. Note that I haven't run it yet, because the rest of it isn't finished. If you're doing an scp to a host for the first time, you have to accept the host's fingerprint; on future connections, you don't. My current solution - because I get 3 tries for the password and really only need 1 (it's not like the script will mistype the password... if it's wrong, it's wrong!) - is to pipe in "yes" as the first password attempt. That way, if the fingerprint prompt appears, "yes" accepts it and the correct password becomes the first attempt; if it doesn't, "yes" is consumed as the first password attempt and the correct password as the second. However, I feel this is not a very robust solution, and I know that if I were a customer I would not like seeing "incorrect password" in my output. Especially if it fails for another reason, it would be an incredibly misleading message. What follows is the appropriate section of the script in question. I am open to any tactics that involve using scp (or accomplishing the file transfer) in a different way - I just want to get the job done. I'm even open to shell scripting, although I'm not the best at it.

        def command = []
        command.add('scp')
        command.add(srcusername + '@' + srcrepo + ':' + srcpath)
        command.add(tarusername + '@' + tarrepo + ':' + tarpath)
        def process = command.execute()
        process.consumeOutput(out)
        process << "yes" << LS << tarpassword << LS
        process << "yes" << LS << srcpassword << LS
        process.waitFor()

    Thanks so much, glowcoder
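
    If shelling out to OpenSSH directly is acceptable, the fingerprint prompt can be removed ahead of time instead of being answered blindly (a sketch; hostnames and paths are placeholders):

        # Pre-accept the remote host keys once, so scp never asks about fingerprints
        ssh-keyscan -H srcrepo.example.com tarrepo.example.com >> ~/.ssh/known_hosts

        # Or skip the interactive check per invocation (trades away MITM protection)
        scp -o StrictHostKeyChecking=no user@srcrepo.example.com:/src/path user@tarrepo.example.com:/tar/path

        # Better still, key-based auth removes the password piping entirely
        ssh-keygen -t rsa -f ~/.ssh/scp_key -N ''
        ssh-copy-id -i ~/.ssh/scp_key.pub user@srcrepo.example.com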


  • How to convert Markdown files to Dokuwiki, on a PC

    - by Clare Macrae
    I'm looking for a tool or script to convert Markdown files to Dokuwiki format, that will run on a PC. This is so that I can use MarkdownPad on a PC to create initial drafts of documents, and then convert them to Dokuwiki format to upload to a Dokuwiki installation that I have no control over. (This means that the Markdown plugin is no use to me.) I could spend time writing a Python script to do the conversion myself, but I'd like to avoid that if such a thing already exists. The Markdown tags I'd like to have supported/converted are:

    - Heading levels 1-5
    - Bold, italic, underline, fixed-width font
    - Numbered and unnumbered lists
    - Hyperlinks
    - Horizontal rules

    Does such a tool exist, or is there a good starting point available? Things I've found and considered:

    - I initially thought that txt2tags would be helpful, but although it can write both Markdown and Dokuwiki, it is very tied to its own specific input format.
    - I've also seen Markdown2Dokuwiki, and although I'd certainly be willing to use a sed script, even on a PC, it only supports a tiny, tiny part of Markdown's syntax.
    - python-markdown2 also sounded promising, but it only writes out HTML.
    - pandoc - but it doesn't support Dokuwiki output.
    - MultiMarkdown - does not appear to support Dokuwiki output.
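
    One caveat on the pandoc point: later pandoc releases (around 1.13 onward, if memory serves) added a DokuWiki writer, so on a current install the whole job may be a one-liner worth trying before writing anything custom:

        pandoc -f markdown -t dokuwiki -o draft.txt draft.md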


  • Calling Excel from PHP 5 through COM fails on Windows 7 when Apache started through Task Planner

    - by Stefan Pantke
    I'm currently writing an application that controls Excel through COM: the app creates a COM-based Excel instance, opens some XLS files and reads their contents.

    Scenario I: On Windows 7, I start Apache and MySQL using xampp-control with system administrator rights. All works as expected. The PHP-based controller script interacts with Excel as expected.

    Scenario II: A problem appears if I start Apache and MySQL as 'background jobs'. Here is how: I created two jobs using the Windows 7 Task Scheduler. One runs apache_start.bat, the other runs mysql_start.bat. Both tasks run as SYSTEM with elevated privileges when Windows 7 boots. Apache and MySQL work as expected: Apache serves HTTP requests from clients and PHP is able to talk to MySQL. But when I call the PHP controller, which calls and interacts with Excel using COM, I receive an error. The error seems to come from Excel [not COM itself] and reads like this:

        Excel can't read the XLS-file
        Excel failed to save the file due to an ill-named worksheet

    Interestingly, during the first run of the PHP-based controller script it takes a few seconds to render the error message; each subsequent run renders it immediately. The Windows system logs don't show a single problem report entry. Note that the PHP program and the Apache instance didn't change - except for the way Apache was started. The PHP controller script is perfectly able to read the file system, since it obtains the paths to the XLS files through scandir() of a certain directory. Concurrency issues can't be the cause of the problem: a single instance of the specific PHP controller interacts with Excel.

    Question: Could someone provide details on why this happens?
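
    One widely reported workaround, offered here as an assumption rather than a verified fix: Office COM automation under the SYSTEM account is known to fail until a Desktop folder exists in the system profile, so creating it may be all that's needed:

        :: From an elevated command prompt; both paths, to cover 32- and 64-bit Excel
        mkdir "C:\Windows\System32\config\systemprofile\Desktop"
        mkdir "C:\Windows\SysWOW64\config\systemprofile\Desktop"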


  • How do I find my missing songs from last.fm?

    - by duality_
    My disk failed, taking all my music with it - lots of songs. Luckily, I had scrobbled every song to Last.fm. I am looking for a way to scan my disk for my songs, check Last.fm, and tell me which songs present on Last.fm are missing from my disk. So, to recap: I would need to log into my Last.fm account, compile a list of all the songs I have scrobbled, and then scan my computer for missing songs. Is there a program or script that does this? I don't mind if it's a shell script.

    Edit: I know this is possible because I came close to it a little while ago. I created a PHP script (web page) that got all my songs through the Last.fm API and then went through my files on disk and read their ID3 tags. I got very close: the program showed missing songs, but there were many small issues (ID3 reading was buggy, tags had different data, etc.) that required more programming time than I had.
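
    A rough shell sketch of the same idea (assumptions: a Last.fm API key, the user.gettoptracks API method, and jq installed; it matches on filenames rather than ID3 tags, so expect some false positives):

        #!/bin/bash
        USER="your_lastfm_user"    # placeholder
        KEY="your_api_key"         # placeholder
        MUSIC_DIR="$HOME/Music"

        # Pull scrobbled track titles (one page shown; the API is paged)
        curl -s "http://ws.audioscrobbler.com/2.0/?method=user.gettoptracks&user=$USER&api_key=$KEY&format=json&limit=1000" \
            | jq -r '.toptracks.track[].name' | sort -u > scrobbled.txt

        # Report every scrobbled title with no matching filename on disk
        while IFS= read -r title; do
            find "$MUSIC_DIR" -iname "*$title*" -print -quit | grep -q . \
                || echo "missing: $title"
        done < scrobbled.txt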


  • How to display escaped characters in tmux status bar

    - by walrus
    I am running tmux from a tty on an embedded Linux device (NOT a terminal emulator). Because the screen is rather small, I want to add some "icons" to the tmux status bar. To achieve this, I have simply created a font with the appropriate glyphs for things like battery or wifi. I can load the font and display the characters with calls that use an escape into the line-drawing character set, like so:

        echo -e "\xe\234\xf"

    \xe escapes me into line-drawing character mode, \234 is my created character, and \xf returns me to normal character mode so my terminal doesn't start getting goofy. This works perfectly if I enter the command at the terminal, whether tmux is started or not. The issue arises when I try to use it in my ~/.tmux.conf file for the status bar. I currently have a line like this:

        set -g status-right "#(echo -e "\xe\234\xf") #(/script/to/output/powerlevel)

    This simply outputs

        \xe\234\xf powerlevel

    The same happens if I use printf instead of echo. This is the output I would expect on the terminal if I made the call without passing -e to echo, or without enclosing the statement in quotes. I then decided to wrap the echo/printf calls in a shell script. Again, the script works when called from the terminal, but not in tmux's status bar: now I get the unprintable character "?" instead of my icon, like this:

        ? powerlevel

    This is what I would expect if I did not use the line-drawing escapes mentioned above, or if I tried to copy and paste the character as text inside tmux. In addition, calling these character scripts screws up the rest of my status-right: the clock shows about 6 digits for minutes when it is called (though it correctly updates only two of them). How can I make tmux respect the escape characters? Any help or insight is greatly appreciated.
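
    A variant that sometimes helps (a sketch, assuming the wrapper-script approach): emit the raw bytes with printf octal escapes from inside the script, so neither tmux nor the shell has to interpret backslash sequences from the config file:

        #!/bin/sh
        # /script/battery-icon (hypothetical path)
        # \016 = shift-out (enter line drawing), \234 = the glyph, \017 = shift-in
        printf '\016\234\017'

    and then reference only the script from the config:

        set -g status-right "#(/script/battery-icon) #(/script/to/output/powerlevel)"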


  • Writing scripts that work with my emails

    - by queueoverflow
    I currently use Thunderbird as my email client. It has some filters, but that seems to be all I can program in it. On several occasions I have heard people talk about their automated email workflows. One example: when someone does not reply to an email, a script sends a "nag" email asking why there has been no response yet. Another: someone gets so much mail that he cannot read it all, so after a week, unread email is put on hold and the sender gets a "if it was important, reply to this email and it will be taken off hold" message; the script then takes the answer and moves the mail back into the important folder. I read about FiltaQuilla, which seems nice, but it does not seem to be the kind of programming that I am looking for. How can I write general-purpose scripts like those? Do I need to write my own Python IMAP/SMTP client (if that is even possible), or can I script it, say in JavaScript, in Thunderbird?
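
    Writing such a client is very doable: Python ships imaplib and smtplib in the standard library. A minimal sketch (host and credentials are placeholders) that lists old unread mail, which could then feed a "put on hold" step:

        #!/usr/bin/env python
        # Sketch only: list unread messages older than a given date (Python 2 style)
        import imaplib

        M = imaplib.IMAP4_SSL("imap.example.com")   # placeholder host
        M.login("user", "password")
        M.select("INBOX")

        # IMAP SEARCH: criteria inside parentheses are ANDed together
        typ, data = M.search(None, '(UNSEEN BEFORE "01-Jan-2012")')
        for num in data[0].split():
            # BODY.PEEK fetches headers without marking the message as read
            typ, msg = M.fetch(num, '(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT)])')
            print msg[0][1]
        M.logout()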


  • Configure IIS to pass-through CGI output without any conditioning

    - by Daniel Watrous
    I'm building a web service on Windows 2008 R2 with IIS 7.5 and Python 2.5. Right now I have the Handler Mappings and everything else set up just fine - except that IIS modifies what it gets back from the CGI script before sending it along to the client. Here's an example. I wrote the following CGI script:

        # hello.py
        print "Status: 400 Bad Request"
        print "Content-Type: text/html"
        print
        print "Error Message"

    According to the HTTP spec this should be fine, and a status of 400 should allow for a description of the error in the body of the response. But the response that actually comes back to me is:

        Status: 400 Bad Request
        Date: Fri, 11 Feb 2011 17:58:30 GMT
        X-Powered-By: ASP.NET
        Connection: close
        Content-Length: 11
        Server: Microsoft-IIS/7.5
        Content-Type: text/html

        Bad Request

    I've seen on this forum and others how to change or eliminate the X-Powered-By header element, but I would like IIS to leave the response alone altogether. I'm not sure why it takes my response, deletes "Error Message" from the body, replaces it with "Bad Request", and then adds all that other junk. Is there some way to tell IIS to just send the response along without making any changes at all?
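
    The behavior described looks like IIS's "friendly" error pages replacing the CGI body on error statuses. The usual switch for that is httpErrors/existingResponse - a sketch, to be dropped into the site's web.config and verified against this IIS 7.5 setup:

        <configuration>
          <system.webServer>
            <!-- Pass error responses from handlers/CGI through untouched -->
            <httpErrors existingResponse="PassThrough" />
          </system.webServer>
        </configuration>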


  • Apache serving empty gzip with assets produced by Rails Asset Pipeline

    - by PizzaPill
    I followed the steps described in the blog post "The Asset Pipeline, from development to production" and tweaked them to my environment. The two important files are:

    /etc/apache2/sites-available/example.com:

        <VirtualHost *:80>
          ServerName example.com
          ServerAlias www.example.com
          DocumentRoot "/var/www/sites/example.com/current/public"
          ErrorLog "/var/log/apache2/example.com-error_log"
          CustomLog "/var/log/apache2/example.com-access_log" common

          <Directory "/var/www/sites/example.com/current/public">
            Options All
            AllowOverride All
            Order allow,deny
            Allow from all
          </Directory>

          <Directory "/var/www/sites/example.com/current/public/assets">
            AllowOverride All
          </Directory>

          <LocationMatch "^/assets/.*$">
            Header unset Last-Modified
            Header unset ETag
            FileETag none
            ExpiresActive On
            ExpiresDefault "access plus 1 year"
          </LocationMatch>

          RewriteEngine On
          # Remove the www
          RewriteCond %{HTTP_HOST} ^www.example.com$ [NC]
          RewriteRule ^(.*)$ http://example.com/$1 [R=301,L]
        </VirtualHost>

    /var/www/sites/example.com/shared/assets/.htaccess:

        RewriteEngine on
        RewriteCond %{HTTP:Accept-Encoding} \b(x-)?gzip\b
        RewriteCond %{REQUEST_FILENAME}.gz -s
        RewriteRule ^(.+) $1.gz [L]

        <FilesMatch \.css\.gz$>
          ForceType text/css
          Header set Content-Encoding gzip
        </FilesMatch>

        <FilesMatch \.js\.gz$>
          ForceType text/javascript
          Header set Content-Encoding gzip
        </FilesMatch>

    But Apache seems to send empty gzip files: the test site loses all styles, and Firebug doesn't find any content for the CSS files. Although if I request an asset path directly, I get some gibberish that looks like binary data. If I move the .htaccess file away, everything is back to normal. How can I find out where/what went wrong, or do you have any suggestions what error I made?

        > apache2 -v
        Server version: Apache/2.2.14 (Ubuntu)
        Server built: Mar 5 2012 16:42:17
        > uname -a
        Linux node0 2.6.18-028stab094.3 #1 SMP Thu Sep 22 12:47:37 MSD 2011 x86_64 GNU/Linux
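
    One guess worth ruling out (an assumption, not a confirmed diagnosis): if mod_deflate is also active for this vhost, the pre-gzipped assets get compressed a second time, and a browser that decodes one layer is left holding gzip bytes - which would look exactly like the "binary gibberish" described. A quick test plus an opt-out:

        # What does one asset actually look like on the wire?
        curl -s -H "Accept-Encoding: gzip" -D - http://example.com/assets/application.css -o /tmp/a.bin
        file /tmp/a.bin    # gzip-in-gzip still reports 'gzip compressed data'

    and, in the assets .htaccess alongside the FilesMatch blocks:

        <FilesMatch "\.(css|js)\.gz$">
          SetEnv no-gzip 1
        </FilesMatch>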


  • How can I pause console output in rxvt?

    - by Javid Jamae
    I'm running rxvt in Cygwin on a Windows box. This is how I invoke it:

        rxvt -sr -sl 2500 -sb -geometry 90x30 -tn rxvt -fn "Lucida Console-14" -e /usr/bin/bash --login -i

    Does anyone know how to pause the console output in rxvt? I can use Ctrl-S / Ctrl-Q to pause / un-pause, but this won't work if a script is already running and spewing output to stdout. Highlighting the terminal window with the mouse doesn't seem to work as it does in other consoles, such as the standard Cygwin console or the Windows command prompt. Some sort of scroll lock would be nice, but I can't seem to find any way to do this. I know I could just pipe my output to a file, but I want a way to pause the output of something that I didn't expect to explode with console output. Basically, I want to scroll back while it's running without it constantly moving me to the bottom of the output buffer as it writes more data to stdout. I don't particularly care whether the solution actually pauses the script (like highlighting with the mouse does in the Windows command window) or just scroll-locks and lets me scroll while the underlying script keeps running - though I'd like to know how to do both if possible.
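
    One workaround, offered as a sketch rather than an rxvt feature: run the shell under GNU screen (or tmux) inside rxvt; their copy modes freeze the viewport while the program keeps writing to the scrollback:

        screen                  # start a session inside rxvt, then run the noisy command
        # Ctrl-A [  enters copy mode: output stops following, PgUp/PgDn scroll freely
        # Esc (or q) returns to live output; the script itself never pauses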


  • Why Wireshark does not recognize this HTTP response?

    - by Alois Mahdal
    I have a trivial CGI script that outputs simple text content. It's written in Perl, uses the CGI module, and specifies only the most basic headers:

        print $q->header(
            -type => 'text/plain',
            -Content_length => $length,
        );
        print $stuff;

    There's no apparent issue with functionality, but I'm confused by the fact that Wireshark does not recognize the HTTP response as HTTP - it's marked as TCP. Here are the request and response:

        GET /cgi-bin/memfile/memfile.pl?mbytes=1 HTTP/1.1
        Host: 10.6.130.38
        User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:11.0) Gecko/20100101 Firefox/11.0
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: cs,en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip, deflate
        Connection: keep-alive

        HTTP/1.1 200 OK
        Date: Thu, 05 Apr 2012 18:52:23 GMT
        Server: Apache/2.2.15 (Win32) mod_ssl/2.2.15 OpenSSL/0.9.8m
        Content-length: 1048616
        Keep-Alive: timeout=5, max=100
        Connection: Keep-Alive
        Content-Type: text/plain; charset=ISO-8859-1

        XXXXXXXX...

    And here is the packet overview (the full packet is here on pastebin):

        No. Time     Source      srcp Destination dstp  Protocol Info                               tcp.stream abstime
        5   0.112749 10.6.130.38 80   10.6.130.53 48072 TCP      [TCP segment of a reassembled PDU] 0          20:52:23.228063

        Frame 5: 1514 bytes on wire (12112 bits), 1514 bytes captured (12112 bits)
        Ethernet II, Src: Dell_97:29:ac (00:1e:4f:97:29:ac), Dst: Dell_3b:fe:70 (00:24:e8:3b:fe:70)
        Internet Protocol Version 4, Src: 10.6.130.38 (10.6.130.38), Dst: 10.6.130.53 (10.6.130.53)
        Transmission Control Protocol, Src Port: http (80), Dst Port: 48072 (48072), Seq: 1, Ack: 330, Len: 1460

    Now, here is what I see in Wireshark: there's the usual TCP handshake, then the GET request shown as HTTP with a preview, then the next packet contains the response - but it is not marked as an HTTP response, just as a generic "[TCP segment of a reassembled PDU]", and it is not caught by the "http.response" filter. Can somebody explain why Wireshark does not recognize it? Is there something wrong with the response?
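
    A plausible explanation (my reading of the capture, not verified): the body is about 1 MB, so the response spans hundreds of TCP segments, and with "Reassemble TCP streams" enabled Wireshark attributes the HTTP response to the last segment of the reassembled body - every earlier segment, including the one carrying the status line, shows as "[TCP segment of a reassembled PDU]". Toggling the preference makes the difference visible (tshark syntax from newer releases):

        # With desegmentation on, http.response matches the final body segment
        tshark -r capture.pcap -o tcp.desegment_tcp_streams:TRUE -Y http.response
        # With it off, the first segment is dissected as HTTP on its own
        tshark -r capture.pcap -o tcp.desegment_tcp_streams:FALSE -Y http.response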


  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

    - The official docs say to use the desktop application, which is not applicable in my situation. I installed the desktop app on my desktop machine anyway and changed the default folder location there, but I can't find where this change is stored in the ~/.dropbox/ directory, so I can't make the same change on the server.
    - This page (and several others) recommends a Python script to do the job. Looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist in my Dropbox install, leading me to believe the script is out of date.
    - This forum thread suggests manually inserting the required row into the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I presume the information in that thread is also out of date for version 2.0.
    - I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary detects the absence of a local X11 install and starts a command-line daemon instead, which provides no means to change the option I need.

    I am currently using a symlink, as suggested in one answer, but this is a kludge. I would like to know the correct way to make the change. How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.
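
    One more trick that gets reported for this (an assumption - I have not verified it against 2.0.0): the daemon roots its sync folder at $HOME/Dropbox, so launching it with an overridden HOME relocates the whole tree without touching any database:

        # Hypothetical layout: keep everything under /srv/dropbox
        mkdir -p /srv/dropbox
        HOME=/srv/dropbox /home/user/.dropbox-dist/dropboxd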


  • How to create hash or yml from top level attributes values of node?

    - by Sarah Haskins
    I have a Chef recipe where I want to take all of the attributes under node['cfn']['environment'] and write them to a YAML file. I could do something like this (it works fine):

        content = {
          "environment_class" => node['cfn']['environment']['environment_class'],
          "node_id" => node['cfn']['environment']['node_id'],
          "reporting_prefix" => node['cfn']['environment']['reporting_prefix'],
          "cfn_signal_url" => node['cfn']['environment']['signal_url']
        }
        yml_string = YAML::dump(content)

        file "/etc/configuration/environment/platform.yml" do
          mode 0644
          action :create
          content "#{yml_string}"
        end

    But I don't like that I have to explicitly list the names of the attributes: if I later add a new attribute, it would be nice if it were automatically included in the written-out YAML file. So I tried something like this:

        yml_string = node['cfn']['environment'].to_yaml

    But because the node is actually a Mash, I get a platform.yml file with a lot of unexpected nesting that I don't want:

        --- !ruby/object:Chef::Node::Attribute
        normal:
          tags: []
          cfn:
            environment: &25793640
              reporting_prefix: Platform2
              signal_url: https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/...
              environment_class: Dev
              node_id: i-908adf9
        ...

    But what I want is this:

        ----
        reporting_prefix: Platform2
        signal_url: https://cloudformation-waitcondition-us-east-1.s3.amazonaws.com/...
        environment_class: Dev
        node_id: i-908adf9

    How can I achieve the desired YAML output without explicitly listing the attributes by name?
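
    A sketch that usually does it (assuming a Chef version whose attribute Mashes respond to to_hash): convert the Mash to a plain Hash before dumping, so YAML sees ordinary keys instead of the Chef::Node::Attribute wrapper:

        yml_string = node['cfn']['environment'].to_hash.to_yaml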


  • Hyper-V vss-writer not making current copies

    - by Martinnj
    I'm using diskshadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of 3 scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again. The first script - the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:

        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent

        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive

        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup

        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next script is just a robocopy, and then an unexpose. Now, when I run the above script, I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct. While the Hyper-V writer is active, two things happen on the guest machines:

    - The Debian/Linux machine goes into a saved state and restores when done - all fine.
    - The Windows guests show "creating VSS snapshot sets" or something similar.

    Then X: gets exposed and I can copy the .vhd files over. The problem is that, for some reason, the VHD files I copy over seem to be old copies: they are missing files, users and updates that are present on the actual machines. I also tried putting the machines into a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.
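
    One check worth running before each backup (standard VSS tooling, nothing specific to this setup): confirm the Hyper-V writer is healthy, since a writer stuck in a failed state can hand out stale snapshot data without reporting an error to diskshadow:

        vssadmin list writers
        rem Look for "Microsoft Hyper-V VSS Writer": State should be [1] Stable,
        rem Last error: No error. If not, restarting the VM management service
        rem (vmms) may clear it - offered as a guess, not a verified fix.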


  • Auto-rotate rotated images with mogrify

    - by Frank Presencia Fandos
    Some of my images were taken rotated, but they keep this orientation data. The problem is that when I use mogrify to convert them from JPG to png, that data seems to disappear. To illustrate the problem, here are the script and a screenshot.

    The script with the code - put it in a text file, give it execution permission, double-click it, run it (from a terminal if you wish) and wait a while. All the JPGs in that folder will be converted to png:

        #! /bin/bash
        echo "Converting JPG to png. Please don't close this window."
        mogrify -alpha on -format png *.JPG
        mogrify -alpha on -format png *.jpg

    It works great and adds an alpha channel. This is personally useful when I edit the images later, so I don't have to add the channel individually. Now the screenshot that illustrates the problem:

    [screenshot omitted]

    As you can see, the original JPGs' preview is right, the converted previews are wrong, the Shotwell rendering is right, and the GIMP edit is wrong - GIMP didn't even say the image was rotated, as it usually does with other images. How can I edit my script to preserve the orientation?
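
    ImageMagick has a flag for exactly this case (worth verifying on your version): -auto-orient rotates the pixels according to the EXIF orientation before writing, so the png comes out upright even though png carries no orientation tag:

        #! /bin/bash
        echo "Converting JPG to png. Please don't close this window."
        mogrify -auto-orient -alpha on -format png *.JPG
        mogrify -auto-orient -alpha on -format png *.jpg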


  • Configuring wsgi for a simple Python based site

    - by jbbarnes
    I have an Ubuntu 10.04 server that already has Apache and wsgi working. I also have a Python script that works just fine using the make_server command:

        if __name__ == '__main__':
            from wsgiref.simple_server import make_server
            srv = make_server('', 8080, display_status)
            srv.serve_forever()

    Now I would like to have the page always active without having to run the script manually. I looked at what Moin is doing. I found these lines in apache2.conf:

        WSGIScriptAlias /wiki /usr/local/share/moin/moin.wsgi
        WSGIDaemonProcess moin user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup moin

    And moin.wsgi is as listed:

        import sys, os
        sys.path.insert(0, '/usr/local/share/moin')
        from MoinMoin.web.serving import make_application
        application = make_application(shared=True)

    QUESTION: Can I create a similar section in apache2.conf pointing to another wsgi file? Like this:

        WSGIScriptAlias /status /mypath/status.wsgi
        WSGIDaemonProcess status user=www-data group=www-data processes=5 threads=10 maximum-requests=1000 umask=0007
        WSGIProcessGroup status

    And if so, what is required to convert my simple_server script into a daemonized process? Most of the information I find about wsgi relates to using it with frameworks like Django. I haven't found a simple howto detailing how to make this work. Thanks.
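
    The conversion itself is small (a sketch - status_script and display_status are assumed names for the existing module and WSGI function): mod_wsgi just needs a file exposing a callable named application, which replaces the make_server block entirely:

        # /mypath/status.wsgi (hypothetical path, matching the WSGIScriptAlias above)
        import sys
        sys.path.insert(0, '/mypath')              # wherever the module lives

        from status_script import display_status   # hypothetical module name
        application = display_status               # mod_wsgi looks up this name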


  • Comparing 2 (or 3 Files If Possible) "Line By Line"

    - by PythEch
    I want to find the differences between 2 files (or 3 files if possible), line by line. The diff utility can do this, but it gives inaccurate results: both files have exactly the same number of lines, 134, yet diff reports "added lines" and "removed lines". That is wrong - with the same number of lines, there are no added or removed lines. The text files I want to compare contain only numbers; maybe that's why the algorithm fails. I couldn't find any option to prevent that, although I may be wrong - I mean, there should be an option for that, but I couldn't find one.

    This is what I get (5am.txt vs 6am.txt - there is a huge problem): [screenshot omitted]

    This is what I want (6am.txt vs 7am.txt - still has problems): [screenshot omitted]

    But the first image still shows the problem at the last lines.

    Edit: After I figured out that there is no utility to do this, I handled it myself. I did almost the same thing as RedGrittyBrick. This script imitates the diff utility, so I (or you) can use it with diff2html. To use it with diff2html, just change the line

        diff_stdout = os.popen("diff %s" % string.join(argv[1:]), "r")

    to

        diff_stdout = os.popen("script.py %s" % string.join(argv[1:]), "r")

    and name this script whatever you want:

        import sys

        f1 = open(sys.argv[1], "r")
        f1_read = f1.readlines()
        f1.close()

        f2 = open(sys.argv[2], "r")
        f2_read = f2.readlines()
        f2.close()

        changed = {}
        first_c = ""
        for n in range(len(f1_read)):
            if f1_read[n] != f2_read[n]:
                if first_c == "":
                    first_c = n + 1
                changed[first_c] = n + 1
            else:
                first_c = ""

        # Let's imitate diff-utils...
        for (x, y) in changed.items():
            print "%d,%dc%d,%d" % (x, y, x, y)
            for i in range(x, y + 1):
                sys.stdout.write("< %s" % f1_read[i - 1])
            print "---"
            for i in range(x, y + 1):
                sys.stdout.write("> %s" % f2_read[i - 1])

    Final results: [screenshot omitted]
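
    For strict positional comparison of two same-length files, a small awk sketch sidesteps diff's alignment heuristics entirely:

        # Print the line number plus both versions wherever the files disagree
        awk 'NR==FNR { a[FNR] = $0; next }
             $0 != a[FNR] { printf "%d: %s | %s\n", FNR, a[FNR], $0 }' 5am.txt 6am.txt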


  • Secure data from a server to a workstation using jumper hosts

    - by apalsson
    Hello. I have a WWW server, and my problem is that its content is sensitive and should not be accessible to people without proper credentials. How can I improve ease of use while maintaining security in the following scenario?

    The server is accessed through a "jumper host": the client connects to the jumper using a VPN connection and uses Remote Desktop to access the jumper. From the jumper, he uses Remote Desktop again to access the server. Finally, on the server, the user can access the content using a web browser. Every step from the VPN client to the web browser requires authentication with a SmartCard token.

    This seems quite secure to me. Content only gets mirrored in the Remote Desktop session between server and jumper, so there are no cached files to worry about. The connection between jumper and client is protected using VPN (SSL), so there is no eavesdropping. But it is quite cumbersome for the clients, with many steps and connections to open. :( So, how can I improve the user experience of accessing my server without compromising security? Thanks.


  • Windows 7 scheduled task returns 0x2

    - by demmith
    I have identical scheduled tasks running in Windows XP Pro and Windows 7. The XP Pro one runs fine; the Windows 7 one always returns 0x2 (which means "The system cannot find the file specified"; however, executing from the command line is no problem) in the Last Run Result column of the Task Scheduler UI. The scheduled task executes a .bat file daily, and the .bat file contains a call to execute a Perl script. As stated above, it executes under XP without any trouble, but under Windows 7, no dice. The task under Windows 7 is set to "run whether the user is logged on or not" (in this case it is me; I am the only user of the system) and to "Run with highest privileges", and it is not hidden. The .bat file executes perfectly well from the command line: it calls the Perl script as expected, and the Perl script does its thing. I have searched far and wide looking for an appropriate answer to this issue, but so far I have found nothing. What the devil is going on with this Win7 scheduled task? I am ready to pull my hair out.
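
    A common culprit in exactly this situation (a guess, but a frequent one): tasks set to "run whether the user is logged on or not" start in C:\Windows\System32, so any relative path inside the .bat - including the call to the Perl script - fails with 0x2 even though it works from a command prompt opened in the right folder. Either fill in the task action's "Start in (optional)" field with the .bat's directory, or make the paths absolute:

        rem Inside the .bat - hypothetical paths, adjust to the real layout
        cd /d "C:\scripts"
        "C:\Perl\bin\perl.exe" "C:\scripts\myscript.pl"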


  • Changing a set-cookie header using mod_rewrite/mod_proxy

    - by olrehm
    I have a bunch of CGI scripts which are served using HTTPS. They can only be reached on the intranet, not from the outside. They set a cookie with the attribute 'Secure', so that it can only be sent via HTTPS. There is also a reverse proxy to one of these scripts, unfortunately using plain HTTP. When a response comes in from my CGI script with a secure cookie, it is not passed on via HTTP (after all, that is what the attribute is for). I need, however, an exception to this rule.

    Is it possible to use mod_rewrite/mod_proxy or something similar to change the Set-Cookie header in the response coming from my CGI script and remove the Secure attribute, so that the cookie can be passed back to the user over the unsafe HTTP connection? I understand that this defeats the purpose of Secure in the first place, but I need it as a temporary workaround. I have searched the web and found how to add a Set-Cookie header using mod_rewrite, and also how to retrieve the value of a cookie coming from the client in a Cookie header. What I have not yet found is how to modify the Set-Cookie header received in the response of a script I am proxying for. Is that possible? How would I do that? Ole
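
    mod_headers, rather than mod_rewrite, is the usual tool for rewriting response headers on a proxy. A sketch for the plain-HTTP vhost (Header edit needs Apache 2.2.4 or later; test the regex against your actual cookies):

        # Strip the Secure attribute from cookies set by the proxied script
        Header edit Set-Cookie "(?i)^(.*);\s*Secure(.*)$" "$1$2"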


  • Graphite not running

    - by River
    I'm currently trying to install Graphite 0.9.9 on a Gentoo box using these instructions from the Graphite wiki. Essentially, they front Graphite with Apache and mod_wsgi. Everything seems to have gone well, except that Apache / the Graphite webapp never returns a response to the web browser (the browser waits forever for the page to load). I've turned on the Graphite debug info, but the only message in the log files is this, repeated over and over again in info.log (with the pid always changing):

        Thu Feb 23 01:59:38 2012 :: graphite.wsgi - pid 4810 - reloading search index

    These instructions have worked for me before to set up Graphite on an Ubuntu machine. I suspect that mod_wsgi is dying, but I have confirmed that mod_wsgi works fine when not serving the Graphite webapp. This is what my graphite.conf vhost file looks like:

        WSGISocketPrefix /etc/httpd/wsgi/

        <VirtualHost *:80>
          ServerName # Server name
          DocumentRoot "/opt/graphite/webapp"
          ErrorLog /opt/graphite/storage/log/webapp/error.log
          CustomLog /opt/graphite/storage/log/webapp/access.log common

          # I've found that an equal number of processes & threads tends
          # to show the best performance for Graphite (ymmv).
          WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
          WSGIProcessGroup graphite
          WSGIApplicationGroup %{GLOBAL}
          WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

          WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi

          Alias /content/ /opt/graphite/webapp/content/
          <Location "/content/">
            SetHandler None
          </Location>

          # XXX In order for the django admin site media to work you
          # must change @DJANGO_ROOT@ to be the path to your django
          # installation, which is probably something like:
          # /usr/lib/python2.6/site-packages/django
          Alias /media/ "/usr/lib64/python2.6/site-packages/django/contrib/admin/media/"
          <Location "/media/">
            SetHandler None
          </Location>

          # The graphite.wsgi file has to be accessible by apache. It won't
          # be visible to clients because of the DocumentRoot though.
          <Directory /opt/graphite/conf/>
            Order deny,allow
            Allow from all
          </Directory>
        </VirtualHost>
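
    One thing worth checking, since the symptom is a hang with no error rather than a crash (offered as a guess): in daemon mode mod_wsgi must be able to create its UNIX socket under WSGISocketPrefix, and a path the Apache user cannot write to can stall every request. The mod_wsgi docs suggest a runtime directory rather than /etc:

        # Somewhere the Apache user can create sockets, e.g.:
        WSGISocketPrefix /var/run/wsgi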


  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers.

    My proposed solution: create a single queue in CUPS with a backend script that spits the job out to the two real printer queues.

    My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:

    - Is there already an option to do this in CUPS that I've missed?
    - The line I use to add my queue is lpadmin -p MultiPass -E -v multipass -P "Generic PostScript Printer". But the DeviceURI is rejected as bad unless I specify a directory, like -v multipass:/tmp. Why is this?
    - For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. Problem is, I'm getting nothing. The logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.

    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
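
    For reference, a minimal backend sketch (my reading of the cups-backend(7) conventions - check it against the man page): with no arguments a backend prints a discovery line; for a job it receives job-id, user, title, copies, options and possibly a filename, with the job data on stdin when no filename is given. It can then resubmit to the real queues:

        #!/bin/bash
        # /usr/lib/cups/backend/multipass - must be owned by root, not world-writable

        if [ $# -eq 0 ]; then
            # Discovery mode: tell CUPS this backend exists
            echo 'direct multipass "Unknown" "Duplicate jobs to two queues"'
            exit 0
        fi

        # Args: job-id user title copies options [file]; data on stdin if no file
        JOB="${6:-/dev/stdin}"
        TMP=$(mktemp)
        cat "$JOB" > "$TMP"

        # Resubmit to the two real queues (queue names are placeholders)
        lp -d printer_a "$TMP"
        lp -d printer_b "$TMP"

        rm -f "$TMP"
        exit 0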

