Search Results

Search found 9417 results on 377 pages for 'auth module'.


  • Intermittently, IIS7 requests get stuck in WindowsAuthenticationModule

    - by rbeier
    Hi,

    We're running an IIS7 server hosting several dozen websites. Several of these websites are all part of the same legacy app we've developed. These sites all run the same code and run in the same app pool. Roughly once a month over the past few months, we've found that all requests for this app pool start hanging indefinitely. When this happens, we receive an alert and we recycle the app pool. After that, the sites start working again. This only ever affects this one app pool - never any others on the same server.

    A couple times, before recycling the pool, I've looked at the currently-executing requests in the worker process. They all show up as executing inside the WindowsAuthenticationModule. Which is strange, because the vast majority of the application does not require authentication. There is a small admin section which uses Windows auth... but all the other requests should be anonymous. Does anyone have any idea as to what might be causing this?

    There are several unusual things about the way these sites are set up. As I mentioned, they all run the same code - multiple sites point at the same physical directory. The only difference is the host header bindings. I'm not sure why there isn't just one site with all the host headers, but that's how it works. In several of these sites, the same physical directory is mapped at two levels - as the root of the site and again as an application within the site. So if a user goes to http://oursite.com/index.aspx, that maps to c:\files\oursite\index.aspx. If a user goes to http://oursite.com/foo/index.aspx, that also maps to c:\files\oursite\index.aspx. I think there is code which looks at the request URL and handles the two requests differently. This is strange because the same web.config ends up being interpreted as a site config file, and also as an application config file within the site. I don't know if this might be related to the authentication problem.

    If we can't find the cause, we're thinking of a few workarounds we could try:

      - Move the admin section into a separate site, and give the client a new admin URL. Run that separate site in its own app pool. Then in the web.config shared by all the other sites, remove the WindowsAuthenticationModule. That way there should be no possibility of a hang within the WindowsAuthenticationModule.
      - Try running all these sites in the classic pipeline instead of the integrated pipeline. They were working fine on our old IIS6 server...
      - (If we get desperate) Set up a watchdog script which monitors the sites and auto-recycles the app pool when it detects that requests are getting stuck.

    What do you think? Thanks for your help,
    Richard

    Read the article

  • postfix error: open database /var/lib/mailman/data/aliases.db: No such file

    - by Thufir
    In trying to follow the Ubuntu guide for postfix and mailman, I do not understand these directions:

        This build of mailman runs as list. It must have permission to read /etc/aliases and read and write /var/lib/mailman/data/aliases. Do this with these commands:
        sudo chown root:list /var/lib/mailman/data/aliases
        sudo chown root:list /etc/aliases
        Save and run:
        sudo newaliases

    I'm getting this kind of error:

        root@dur:~# telnet localhost 25
        Trying 127.0.0.1...
        Connected to localhost.
        Escape character is '^]'.
        220 dur.bounceme.net ESMTP Postfix (Ubuntu)
        ehlo dur
        250-dur.bounceme.net
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        quit
        221 2.0.0 Bye
        Connection closed by foreign host.

        root@dur:~# tail /var/log/mail.log
        Aug 28 01:16:43 dur postfix/master[19444]: terminating on signal 15
        Aug 28 01:16:43 dur postfix/postfix-script[19558]: starting the Postfix mail system
        Aug 28 01:16:43 dur postfix/master[19559]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 01:16:45 dur postfix/postfix-script[19568]: stopping the Postfix mail system
        Aug 28 01:16:45 dur postfix/master[19559]: terminating on signal 15
        Aug 28 01:16:45 dur postfix/postfix-script[19673]: starting the Postfix mail system
        Aug 28 01:16:45 dur postfix/master[19674]: daemon started -- version 2.9.1, configuration /etc/postfix
        Aug 28 01:17:22 dur postfix/smtpd[19709]: error: open database /var/lib/mailman/data/aliases.db: No such file or directory
        Aug 28 01:17:22 dur postfix/smtpd[19709]: connect from localhost[127.0.0.1]
        Aug 28 01:18:37 dur postfix/smtpd[19709]: disconnect from localhost[127.0.0.1]

        root@dur:~# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases, hash:/var/lib/mailman/data/aliases
        append_dot_mydomain = no
        biff = no
        broken_sasl_auth_clients = yes
        config_directory = /etc/postfix
        default_transport = smtp
        home_mailbox = Maildir/
        inet_interfaces = loopback-only
        mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/conf.d/01-mail-stack-delivery.conf -m "${EXTENSION}"
        mailbox_size_limit = 0
        mailman_destination_recipient_limit = 1
        mydestination = dur, dur.bounceme.net, localhost.bounceme.net, localhost
        myhostname = dur.bounceme.net
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        readme_directory = no
        recipient_delimiter = +
        relay_domains = lists.dur.bounceme.net
        relay_transport = relay
        relayhost =
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtp_use_tls = yes
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_recipient_restrictions = reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/dovecot-auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_tls_auth_only = yes
        smtpd_tls_cert_file = /etc/ssl/certs/ssl-mail.pem
        smtpd_tls_key_file = /etc/ssl/private/ssl-mail.key
        smtpd_tls_mandatory_ciphers = medium
        smtpd_tls_mandatory_protocols = SSLv3, TLSv1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        transport_maps = hash:/etc/postfix/transport

    ...and I am wondering what the connection might be. I do see that I don't have the requisite files:

        root@dur:~# ll /var/lib/mailman/data/aliases
        ls: cannot access /var/lib/mailman/data/aliases: No such file or directory

    At what stage were those aliases created? How can I create them? Is that what's causing this error?

        error: open database /var/lib/mailman/data/aliases.db: No such file or directory
        Aug 28 01:17:22 dur postfix/smtpd[19709]: connect from localhost[127.0.0.1]

    Read the article

  • Multiple Haskell cabal-packages in one directory

    - by aleator
    What is the recommended way of having several cabal packages in one directory?

    Why: I have an old project with many separable modules. Since originally they formed just one program, it was, and still is, handy to have them in the same directory for easy compiling.

    Options:
      - Just suffer and split everything, including the VCS holding the stuff, into different directories?
      - Hack cabal until it is happy with multiple .cabal files in the same directory?
      - Make another subdirectory for each module and put .cabal files there, along with symlinks to the original pieces of code?
      - Something smarter? What?

    Read the article

  • django sphinx automodule -- basics

    - by haras.pl
    Hi, I have a project with several large apps, where the settings and app files are split up. The directory structure goes something like this:

        project_name
            __init__.py
            apps
                __init__.py
                app1
                app2
            3rdparty
                __init__.py
                lib1
                lib2
            settings
                __init__.py
                installed_apps.py
                path.py
                templates.py
                locale.py
                ...
            urls.py

    Every app looks like this:

        __init__.py
        admin
            __init__.py
            file1.py
            file2.py
        models
            __init__.py
            model1.py
            model2.py
        tests
            __init__.py
            test1.py
            test2.py
        views
            __init__.py
            view1.py
            view2.py
        urls.py

    How do I use Sphinx to autogenerate documentation for that? I want something like this: for each entry in the settings module or in INSTALLED_APPS (not starting with django.* or 3rdparty.*), give me auto-generated documentation output based on the docstrings, and run the tests before each git commit. By the way, I tried writing .rst files by hand with

        .. automodule:: module_name
           :members:

    but that sucks for such a big project, and it does not work for settings. Is there an autogen method or something? I am not tied to Sphinx; is there a better solution for my problem?
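    One possible starting point, sketched below rather than taken from the question: newer Sphinx releases ship sphinx-apidoc, which walks a package and generates the per-module .rst stubs (automodule/members already filled in), so they don't have to be written by hand. The conf.py then only needs the project root on sys.path and the Django settings module configured before autodoc imports anything. The paths and the settings module name here are placeholders:

        # docs/conf.py (excerpt) -- minimal autodoc setup; names are illustrative
        import os
        import sys

        # make the project importable for autodoc
        sys.path.insert(0, os.path.abspath('..'))

        # Django models cannot be imported unless settings are configured first
        os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'settings')

        extensions = ['sphinx.ext.autodoc', 'sphinx.ext.viewcode']

    The stubs themselves would then come from something like "sphinx-apidoc -o docs/api apps", re-run whenever apps are added; wiring that plus the test run into a pre-commit hook covers the "before git commit" part.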

    Read the article

  • use doctest and logging in python program

    - by Luke
    #!/usr/bin/python2.4
    import logging
    import sys
    import doctest

    def foo(x):
        """
        >>> foo(0)
        0
        """
        print ("%d" %(x))
        _logger.debug("%d" %(x))

    def _test():
        doctest.testmod()

    _logger = logging.getLogger()
    _logger.setLevel(logging.DEBUG)
    _formatter = logging.Formatter('%(message)s')
    _handler = logging.StreamHandler(sys.stdout)
    _handler.setFormatter(_formatter)
    _logger.addHandler(_handler)
    _test()

    I would like to use the logging module for all of my print statements. I have looked at the first 50 top Google links for this, and they seem to agree that doctest uses its own copy of stdout: if print is used, it works; if the logger is used, it logs to the root console instead. Can someone please demonstrate a working example with a code snippet that lets me combine the two? Note that running nose to test the doctest will just append the log output at the end of the test (assuming you set the switches); it does not treat it as a print statement.
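    A sketch of one workaround (not from the original post; the _StdoutProxy name is made up for illustration): doctest swaps in its own sys.stdout while each example runs, but the StreamHandler above grabbed a reference to the real stdout at import time, so the log output bypasses doctest's capture buffer. Giving the handler a small file-like object that looks up sys.stdout on every write keeps the two in sync:

        import doctest
        import logging
        import sys

        class _StdoutProxy(object):
            """Write to whatever sys.stdout is *right now*, so logging output
            lands in doctest's capture buffer while testmod() is running."""
            def write(self, data):
                sys.stdout.write(data)
            def flush(self):
                sys.stdout.flush()

        _logger = logging.getLogger()
        _logger.setLevel(logging.DEBUG)
        _handler = logging.StreamHandler(_StdoutProxy())
        _handler.setFormatter(logging.Formatter('%(message)s'))
        _logger.addHandler(_handler)

        def foo(x):
            """
            >>> foo(0)
            0
            0
            """
            print("%d" % x)          # captured by doctest as before
            _logger.debug("%d" % x)  # now captured too, hence two expected lines

        if __name__ == '__main__':
            doctest.testmod()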

    Read the article

  • Dynamics of the using keyword

    - by AngryHacker
    Consider the following code:

        // module level declaration
        Socket _client;

        void ProcessSocket()
        {
            _client = GetSocketFromSomewhere();
            using (_client)
            {
                DoStuff(); // receive and send data
                Close();
            }
        }

        void Close()
        {
            _client.Close();
            _client = null;
        }

    Given that the code calls the Close() method, which closes the _client socket and sets it to null, while still inside the using block, what exactly happens behind the scenes? Does the socket really get closed? Are there side effects?

    P.S. This is C# 3.0 on the .NET Micro Framework, but I suppose C#, the language, should behave identically. The reason I am asking is that occasionally, very rarely, I run out of sockets (which are a very precious resource on .NET MF devices).

    Read the article

  • Creating a spider using Scrapy, Spider generation error.

    - by Nacari
    I just downloaded Scrapy (the web crawler) on 32-bit Windows and created a new project folder using the "scrapy-ctl.py startproject dmoz" command in DOS. I then proceeded to create the first spider using the command:

        scrapy-ctl.py genspider myspider myspdier-domain.com

    but it did not work; it returns the error:

        Error running: scrapy-ctl.py genspider, Cannot find project settings module in python path: scrapy_settings

    I know I have the path set right (to python26/scripts), but I am having difficulty figuring out what the problem is. I am new to both Scrapy and Python, so there is a good possibility that I have failed to do something important. Also, I have been using Eclipse with the PyDev plugin to edit the code, if that might cause some problems.
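    The error says Scrapy cannot import a module named scrapy_settings, which for that generation of Scrapy usually means the command is not being run from inside the directory that startproject created (having python26/scripts on the PATH only finds scrapy-ctl.py itself, not the project settings). A quick way to check, sketched here rather than taken from the question, is to try the same import Python is failing on:

        # run this from the directory where you invoke scrapy-ctl.py
        try:
            import scrapy_settings  # the module named in the error message
            print("settings module found: %s" % scrapy_settings.__file__)
        except ImportError:
            print("scrapy_settings is not importable from here; "
                  "cd into the directory created by startproject and retry")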

    Read the article

  • Why does a GPRS modem provide an embedded TCP/IP stack?

    - by Christian Madsen
    My colleague and I are mining the GPRS MODEM market for a module suitable for use with embedded Linux. During the market scan, we see that several vendors highlight that their MODEMs include an embedded TCP/IP stack. This makes me wonder: when we are using embedded Linux which already contains a TCP/IP stack and connects using PPP, will it make use of the stack included in the GPRS MODEM at all? My current assumption is that the stack is included for use with tiny microcontroller OS that do not supply their own stack. Also some of the MODEMs allow for running small applications IN the MODEM baseband processor which could explain the embedded stack... So: is the TCP/IP stack supplied by the GPRS MODEM superfluous when using it with an HL OS or did I overlook something?

    Read the article

  • How to configure encoding in maven

    - by Ethan Leroy
    When I run maven install on my multi-module Maven project, I always get the following output:

        [WARNING] File encoding has not been set, using platform encoding UTF-8, i.e. build is platform dependent!

    So I googled around a bit, but all I can find is that I have to add

        <properties>
          <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        </properties>

    to my pom.xml. But it's already there (in the parent pom.xml). Configuring <encoding> for the maven-resources-plugin or the maven-compiler-plugin also doesn't fix it. So what's the problem?

    Read the article

  • How do you: Symfony functional test on a secure app?

    - by Dan Tudor
    I'm trying to perform some functional tests in symfony 1.4. My application is secure, so the tests return 401 response codes, not 200 as expected. I've tried creating a context and authenticating the user prior to performing the test, but to no avail. Any suggestions? Do I need to pass sfContext to the sfTestFunctional? Thanks

        include(dirname(__FILE__).'/../../bootstrap/functional.php');

        $configuration = ProjectConfiguration::getApplicationConfiguration('backend', 'test', true);
        $context = sfContext::createInstance($configuration);
        new sfDatabaseManager($configuration);

        $loader = new sfPropelData();
        $loader->loadData(sfConfig::get('sf_test_dir').'/fixtures'); // load test data

        $user = sfGuardUserPeer::retrieveByUsername('test');
        $context->getUser()->signin($user);

        $browser = new sfTestFunctional(new sfBrowser());

        $browser->
          get('/')->
          with('request')->begin()->
            isParameter('module', 'video')->
            isParameter('action', 'index')->
          end()->
          with('response')->begin()->
            isStatusCode(200)->
            //checkElement('body', '!/This is a temporary page/')->
          end()
        ;

    Read the article

  • Is it possible to add custom fields to a Drupal taxonomy term?

    - by user278457
    I'd like to add a date field to a drupal taxonomy term, alongside the default "title" and "description" Is there some technique/php/module that lets me do this? Is it possible to do with CCK?? I need to be able to display the new field in the same view presenting the content nodes referencing the term. At the moment, I've added a date field to the content nodes with CCK, and it's displayed by the view. But that's not exactly what I'm going for, I just want to update one date per term.

    Read the article

  • Joomla to Drupal migration problem

    - by Gok Demir
    After migrating a Joomla 1.5 site to Drupal 6 using the Joomla to Drupal module (while importing, I ticked Full HTML), some of the pages now contain annoying markup like this:

        Normal 0 21 false false false TR X-NONE X-NONE MicrosoftInternetExplorer4 <![endif]--><!--[if gte mso 9]>
        DefSemiHidden="true" DefQFormat="false" DefPriority="99" LatentStyleCount="267">
        UnhideWhenUsed="false" QFormat="true" Name="Normal"/>
        UnhideWhenUsed="false" QFormat="true" Name="heading 1"/>

    I think it was copied and pasted from MS Word. How could I fix this? Thanks

    Read the article

  • BIND split-view DNS config problem

    - by organicveggie
    We have two DNS servers: one external server controlled by our ISP and one internal server controlled by us. I'd like internal requests for foo.example.com to map to 192.168.100.5 and external requests to continue to map to 1.2.3.4, so I'm trying to configure a view in BIND. Unfortunately, BIND fails when I attempt to reload the configuration. I'm sure I'm missing something simple, but I can't figure out what it is.

        options {
            directory "/var/cache/bind";
            forwarders { 8.8.8.8; 8.8.4.4; };
            auth-nxdomain no;    # conform to RFC1035
            listen-on-v6 { any; };
        };

        zone "." { type hint; file "/etc/bind/db.root"; };
        zone "localhost" { type master; file "/etc/bind/db.local"; };
        zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; };
        zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; };
        zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; };

        view "internal" {
            zone "example.com" {
                type master;
                notify no;
                file "/etc/bind/db.example.com";
            };
        };

        zone "example.corp" { type master; file "/etc/bind/db.example.corp"; };

        zone "100.168.192.in-addr.arpa" {
            type master;
            notify no;
            file "/etc/bind/db.192";
        };

    I have excluded the entries in the view for allow-recursion and recursion in an attempt to simplify the configuration. If I remove the view and just load the example.com zone directly, it works fine. Any advice on what I might be missing?

    Read the article

  • e-shop implementation: Status for Orders?

    - by Guillermo
    Hello again, my fellow programmers out there. I'm designing and programming an online shop from scratch. It has a module to manage "Orders" that are received via the frontend. I need a status to know what's happening with an order at a certain moment. Let's say the statuses are:

      - Pending Payment
      - Confirmed - Awaiting shipment
      - Shipped
      - Cancelled

    My question is a simple one, but it is very important for the store design: how would you store this status? Would you create a column for it in the Orders table, or would you just "calculate" the status of each order depending on whether payments have been received or shipments have been made for that order (except, I suppose, for an is_cancelled column)? What would be the best approach to model this kind of problem?

    PD: In the future I would even like these statuses to be configurable by other clients using the same software.
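    For contrast, here is a rough sketch of the "calculate it" option, where the status is derived from facts the shop already records; the field names are purely illustrative, not a schema from the question:

        from collections import namedtuple

        # illustrative only: the facts an order row might already carry
        Order = namedtuple("Order", "is_cancelled shipped_at total_paid total_due")

        def order_status(order):
            """Derive the display status from stored facts, so it can never
            drift out of sync with payments or shipments."""
            if order.is_cancelled:
                return "Cancelled"
            if order.shipped_at is not None:
                return "Shipped"
            if order.total_paid >= order.total_due:
                return "Confirmed - Awaiting shipment"
            return "Pending Payment"

        print(order_status(Order(False, None, 0, 100)))  # Pending Payment

    A derived status like this stays consistent by construction, but the moment statuses must become configurable per client (as the PD suggests), an explicit status column driven by a small state machine is usually the easier model to extend.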

    Read the article

  • Intellij Idea 9, what folders to check into (or not check into) source control?

    - by Benju
    Our team has just moved from Netbeans to Intellij 9 Ultimate and need to know what files/folders should typically be excluded from source control as they are not "workstation portable" ie: they reference paths that only exist on one user's computer. As far as I can tell Intellij wants to ignore most of the .idea project including .idea/artifacts/* .idea/inspectionProfiles/* .idea/copyright/* .idea/dataSources.ids .idea/dataSources.xml .idea/workspace.xml However it seems to want to check in the .iml files that exist in each module's root directory. I originally checked in the entire .idea directory via the command line which is obviously not aware of what "should" be ignored by Idea. Is the entire .idea directory typically ignored?

    Read the article

  • missing elements from pcap?

    - by Matthew
    When I check the attributes available in the pcap module, I expect to see something like

        ['DLT_AIRONET_HEADER', 'DLT_APPLE_IP_OVER_IEEE1394', 'DLT_ARCNET', 'DLT_ARCNET_LINUX', 'DLT_ATM_CLIP', 'DLT_ATM_RFC1483', 'DLT_AURORA', 'DLT_AX25', 'DLT_CHAOS', 'DLT_CISCO_IOS', 'DLT_C_HDLC', 'DLT_DOCSIS', 'DLT_ECONET', 'DLT_EN10MB', 'DLT_EN3MB', 'DLT_ENC', 'DLT_FDDI', 'DLT_FRELAY', 'DLT_IEEE802', 'DLT_IEEE802_11', 'DLT_IEEE802_11_RADIO', 'DLT_IEEE802_11_RADIO_AVS', 'DLT_IPFILTER', 'DLT_IP_OVER_FC', 'DLT_JUNIPER_ATM1', 'DLT_JUNIPER_ATM2', 'DLT_JUNIPER_ES', 'DLT_JUNIPER_GGSN', 'DLT_JUNIPER_MFR', 'DLT_JUNIPER_MLFR', 'DLT_JUNIPER_MLPPP', 'DLT_JUNIPER_MONITOR', 'DLT_JUNIPER_SERVICES', 'DLT_LINUX_IRDA', 'DLT_LINUX_SLL', 'DLT_LOOP', 'DLT_LTALK', 'DLT_NULL', 'DLT_PFLOG', 'DLT_PPP', 'DLT_PPP_BSDOS', 'DLT_PPP_ETHER', 'DLT_PPP_SERIAL', 'DLT_PRISM_HEADER', 'DLT_PRONET', 'DLT_RAW', 'DLT_RIO', 'DLT_SLIP', 'DLT_SLIP_BSDOS', 'DLT_SUNATM', 'DLT_SYMANTEC_FIREWALL', 'DLT_TZSP', '__builtins__', '__doc__', '__file__', '__name__', '_newclass', '_object', '_pcap', '_swig_getattr', '_swig_setattr', 'aton', 'dltname', 'dltvalue', 'findalldevs', 'lookupdev', 'lookupnet', 'ntoa', 'pcapObject', 'pcapObjectPtr']

    with a note on pcapObject. However, all I get when running dir(pcap) is

        ['DLT_ARCNET', 'DLT_AX25', 'DLT_CHAOS', 'DLT_EN10MB', 'DLT_EN3MB', 'DLT_FDDI', 'DLT_IEEE802', 'DLT_LINUX_SLL', 'DLT_LOOP', 'DLT_NULL', 'DLT_PFLOG', 'DLT_PFSYNC', 'DLT_PPP', 'DLT_PRONET', 'DLT_RAW', 'DLT_SLIP', '__author__', '__builtins__', '__copyright__', '__doc__', '__file__', '__license__', '__name__', '__url__', '__version__', 'bpf', 'dltoff', 'ex_name', 'lookupdev', 'pcap', 'sys']

    Note the lack of pcapObject. Why is this? What could cause this?
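    The two listings look like they come from two different libraries that both install a module named pcap: pcapObject (plus the SWIG helpers) is what pylibpcap exposes, while the second listing, with pcap.pcap and __author__/__version__ metadata, matches pypcap. A quick check of which one Python is actually importing (a sketch, not from the question):

        import pcap

        print(pcap.__file__)  # which installed package the name "pcap" resolves to
        print(getattr(pcap, '__version__', 'no __version__ attribute'))

        if hasattr(pcap, 'pcapObject'):
            print("pylibpcap API: pcap.pcapObject()")
        elif hasattr(pcap, 'pcap'):
            print("pypcap API: use pcap.pcap() for capture instead")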

    Read the article

  • l2tp server always logs 'sent [CCP ResetReq id=0x3]' when it gets a compressed data request

    - by wilbur
    I have built an xl2tpd/ipsec server on my Ubuntu 12.04.3 box, and I managed to make an L2TP VPN connection to the xl2tpd server from my Android phone. The xl2tpd log said:

        xl2tpd[10828]: Enabling IPsec SAref processing for L2TP transport mode SAs
        xl2tpd[10828]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes
        xl2tpd[10828]: setsockopt recvref[22]: Protocol not available
        xl2tpd[10828]: This binary does not support kernel L2TP.
        xl2tpd[10828]: xl2tpd version xl2tpd-1.2.8 started on atime.me PID:10828
        xl2tpd[10828]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc.
        xl2tpd[10828]: Forked by Scott Balmos and David Stipp, (C) 2001
        xl2tpd[10828]: Inherited by Jeff McAdams, (C) 2002
        xl2tpd[10828]: Forked again by Xelerance (www.xelerance.com) (C) 2006
        xl2tpd[10828]: Listening on IP address 0.0.0.0, port 1701
        xl2tpd[10828]: control_finish: Peer requested tunnel 39154 twice, ignoring second one.
        xl2tpd[10828]: Connection established to 117.136.8.59, 43149. Local: 25339, Remote: 39154 (ref=0/0). LNS session is 'default'

    However, I cannot access the web in my browser. The pppd log said:

        rcvd [Compressed data] 00 1d 82 c4 7c 04 d8 09 ...
        sent [CCP ResetReq id=0x7]

    I have googled a lot and found that this is mostly caused by an MPPE decompression error. I have disabled BSD-Compress compression with nobsdcomp in /etc/ppp/xl2tpd-options, but it did not work. I used openswan-2.6.33 and xl2tpd-1.2.8, which were built from source. My configurations:

    /etc/ipsec.conf

        version 2.0

        config setup
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            protostack=netkey

        conn L2TP-PSK-NAT
            rightsubnet=vhost:%priv
            also=L2TP-PSK-noNAT

        conn L2TP-PSK-noNAT
            authby=secret
            pfs=no
            auto=add
            keyingtries=3
            rekey=no
            ikelifetime=8h
            keylife=1h
            type=transport
            left=106.186.121.214
            leftprotoport=17/1701
            right=%any
            rightprotoport=17/%any

    /etc/xl2tpd/xl2tpd.conf

        [global]
        ipsec saref = yes

        [lns default]
        local ip = 10.10.11.1
        ip range = 10.10.11.2-10.10.11.245
        refuse chap = yes
        refuse pap = yes
        require authentication = yes
        ppp debug = yes
        pppoptfile = /etc/ppp/xl2tpd-options
        length bit = yes

    /etc/ppp/xl2tpd-options

        require-mschap-v2
        ms-dns 8.8.8.8
        ms-dns 8.8.4.4
        asyncmap 0
        auth
        crtscts
        lock
        hide-password
        modem
        name l2tpd
        proxyarp
        lcp-echo-interval 30
        lcp-echo-failure 4
        debug
        nobsdcomp

    Any suggestions? Thanks in advance.

    Read the article

  • From a Perl test file, how do I check the contents of a file?

    - by justintime
    I want to test a script I have written in Perl and specifically check what output it writes to a file. I wrote it some time ago and don't want to modify it to the extent of turning it into a module, but I would like to regression-test it before adding some small functional changes. So far I have:

        use Test::Command tests => 10;
        exit_is_num($cmd, 0);
        ....

    But the command produces some files, and I want to check that those files are what I expect (either equal to a reference or matching some regexp). Any suggestions?

    Read the article

  • How can I use an Ant foreach iteration with values from a file?

    - by Egon Willighagen
    In our Ant build environment, I have to do the same task for a number of items. The AntContrib foreach task is useful for that. However, foreach takes the list as a parameter, whereas I actually have the list in a file. How can I iterate over items in a file in a foreach-like way in Ant? Something like (pseudo-code):

        <foreach target="compile-module" listFromFile="$fileWithModules"/>

    I'm happy to write a custom Task, and I welcome any suggestions on possible solutions.

    Read the article

  • Lighttpd not cleanly restarting (address already in use)

    - by NilObject
    When doing a dist-upgrade recently, my lighttpd-1.4.19 install on Ubuntu 8.04 has begun failing to restart or reload properly with the /etc/init.d/lighttpd restart command.

        ~$ sudo /etc/init.d/lighttpd restart
         * Stopping web server lighttpd ...done.
         * Starting web server lighttpd
        2009-06-13 04:06:36: (network.c.300) can't bind to port: 80 Address already in use
        ...fail!

    The same error occurs when I do a reload. The way I get around it is to kill lighttpd and then issue the start command, but it seems like I shouldn't have to do that :) I've looked at my config files, and can't spot any immediate errors. Does anyone have any ideas what can be causing this error? This seems to be the latest version available via the apt-get route as of writing this question.

    My config file is:

        # Debian lighttpd configuration file
        #
        ############ Options you really have to take care of ####################

        ## modules to load
        # mod_access, mod_accesslog and mod_alias are loaded by default
        # all other module should only be loaded if neccesary
        # - saves some time
        # - saves memory
        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
            "mod_fastcgi",
            "mod_rewrite",
            "mod_redirect",
        )

        ## a static document-root, for virtual-hosting take look at the
        ## server.virtual-* options
        server.document-root = "/var/www/"

        ## where to send error-messages to
        server.errorlog = "/var/log/lighttpd/error.log"

        fastcgi.server = (".php" => ((
            "bin-path" => "/usr/bin/php5-cgi",
            "socket" => "/tmp/php.socket"
        )))

        ## files to check for if .../ is requested
        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )

        ## Use the "Content-Type" extended attribute to obtain mime type if possible
        # mimetype.use-xattr = "enable"

        #### accesslog module
        accesslog.filename = "/var/log/lighttpd/access.log"

        ## deny access the file-extensions
        #
        # ~ is for backupfiles from vi, emacs, joe, ...
        # .inc is often used for code includes which should in general not be part
        # of the document-root
        url.access-deny = ( "~", ".inc" )

        ##
        # which extensions should not be handle via static-file transfer
        #
        # .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )

        mimetype.assign = (
            ".pdf" => "application/pdf", ".sig" => "application/pgp-signature",
            ".spl" => "application/futuresplash", ".class" => "application/octet-stream",
            ".ps" => "application/postscript", ".torrent" => "application/x-bittorrent",
            ".dvi" => "application/x-dvi", ".gz" => "application/x-gzip",
            ".pac" => "application/x-ns-proxy-autoconfig", ".swf" => "application/x-shockwave-flash",
            ".tar.gz" => "application/x-tgz", ".tgz" => "application/x-tgz",
            ".tar" => "application/x-tar", ".zip" => "application/zip",
            ".mp3" => "audio/mpeg", ".m3u" => "audio/x-mpegurl",
            ".wma" => "audio/x-ms-wma", ".wax" => "audio/x-ms-wax",
            ".ogg" => "audio/x-wav", ".wav" => "audio/x-wav",
            ".gif" => "image/gif", ".jpg" => "image/jpeg",
            ".jpeg" => "image/jpeg", ".png" => "image/png",
            ".xbm" => "image/x-xbitmap", ".xpm" => "image/x-xpixmap",
            ".xwd" => "image/x-xwindowdump", ".css" => "text/css",
            ".html" => "text/html", ".htm" => "text/html",
            ".js" => "text/javascript", ".asc" => "text/plain",
            ".c" => "text/plain", ".conf" => "text/plain",
            ".text" => "text/plain", ".txt" => "text/plain",
            ".dtd" => "text/xml", ".xml" => "text/xml",
            ".rss" => "application/rss+xml", ".mpeg" => "video/mpeg",
            ".mpg" => "video/mpeg", ".mov" => "video/quicktime",
            ".qt" => "video/quicktime", ".avi" => "video/x-msvideo",
            ".asf" => "video/x-ms-asf", ".asx" => "video/x-ms-asf",
            ".wmv" => "video/x-ms-wmv", ".bz2" => "application/x-bzip",
            ".tbz" => "application/x-bzip-compressed-tar", ".tar.bz2" => "application/x-bzip-compressed-tar"
        )

        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"

    My /etc/init.d/lighttpd script is (untouched from installation):

        #!/bin/sh
        ### BEGIN INIT INFO
        # Provides:          lighttpd
        # Required-Start:    networking
        # Required-Stop:     networking
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start the lighttpd web server.
        ### END INIT INFO

        PATH=/sbin:/bin:/usr/sbin:/usr/bin
        DAEMON=/usr/sbin/lighttpd
        NAME=lighttpd
        DESC="web server"
        PIDFILE=/var/run/$NAME.pid
        SCRIPTNAME=/etc/init.d/$NAME
        ENV="env -i LANG=C PATH=/usr/local/bin:/usr/bin:/bin"
        SSD="/sbin/start-stop-daemon"
        DAEMON_OPTS="-f /etc/lighttpd/lighttpd.conf"

        test -x $DAEMON || exit 0

        set -e

        # be sure there is a /var/run/lighttpd, even with tmpfs
        mkdir -p /var/run/lighttpd > /dev/null 2> /dev/null
        chown www-data:www-data /var/run/lighttpd
        chmod 0750 /var/run/lighttpd

        . /lib/lsb/init-functions

        case "$1" in
            start)
                log_daemon_msg "Starting $DESC" $NAME
                if ! $ENV $SSD --start --quiet \
                    --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_OPTS ; then
                    log_end_msg 1
                else
                    log_end_msg 0
                fi
                ;;
            stop)
                log_daemon_msg "Stopping $DESC" $NAME
                if $SSD --quiet --stop --oknodo --retry 30 \
                    --pidfile $PIDFILE --exec $DAEMON; then
                    rm -f $PIDFILE
                    log_end_msg 0
                else
                    log_end_msg 1
                fi
                ;;
            reload)
                log_daemon_msg "Reloading $DESC configuration" $NAME
                if $SSD --stop --signal 2 --oknodo --retry 30 \
                    --quiet --pidfile $PIDFILE --exec $DAEMON; then
                    if $ENV $SSD --start --quiet \
                        --pidfile $PIDFILE --exec $DAEMON -- $DAEMON_OPTS ; then
                        log_end_msg 0
                    else
                        log_end_msg 1
                    fi
                else
                    log_end_msg 1
                fi
                ;;
            restart|force-reload)
                $0 stop
                [ -r $PIDFILE ] && while pidof lighttpd | \
                    grep -q `cat $PIDFILE 2>/dev/null` 2>/dev/null ; do sleep 1; done
                $0 start
                ;;
            *)
                echo "Usage: $SCRIPTNAME {start|stop|restart|reload|force-reload}" >&2
                exit 1
                ;;
        esac

        exit 0

    Read the article

  • Can Cython compile to an EXE?

    - by ThantiK
    I know what Cython's purpose is: to write compilable C extensions in a Python-like language in order to produce speedups in your code. What I would like to know (and can't seem to find using my google-fu) is whether Cython can somehow compile into an executable format, since it already seems to break Python code down into C. I already use Py2exe, which is just a packager, but I am interested in using this to compile down to something that is a little harder to unpack (anything packed using Py2exe can basically just be extracted using 7zip, which I do not want). It seems that if this is not possible, my next alternative would be to compile all my code, load it as a module, and then package that using py2exe, at least getting most of my code into compiled form. Right?
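    Two routes come to mind; neither is spelled out in the question, so treat this as a sketch. Cython's --embed flag generates a C file with a main() that embeds the interpreter, which can then be compiled and linked against libpython to produce a real executable. The fallback the question describes (compile the modules, then bundle with py2exe) can be driven from a single setup.py roughly like this; the module names are placeholders:

        # setup.py -- build own modules as C extensions, then bundle with py2exe
        from distutils.core import setup
        from distutils.extension import Extension
        from Cython.Distutils import build_ext
        import py2exe  # imported for its side effect: registers the "py2exe" command

        setup(
            cmdclass={'build_ext': build_ext},
            # placeholder module: each .pyx becomes a compiled extension next to the exe
            ext_modules=[Extension("mymodule", ["mymodule.pyx"])],
            console=["main.py"],  # thin launcher that just imports mymodule and runs it
        )

    Running something like "python setup.py build_ext --inplace py2exe" would then leave the bulk of the logic in compiled extensions, with only the small launcher shipped as bytecode.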

    Read the article

  • Defining a SPI in Clojure

    - by Joe Holloway
    I'm looking for an idiomatic way(s) to define an interface in Clojure that can be implemented by an external "service provider". My application would locate and instantiate the service provider module at runtime and delegate certain responsibilities to it. Let's say, for example, that I'm implementing a RPC mechanism and I want to allow a custom middleware to be injected at configuration time. This middleware could pre-process the message, discard messages, wrap the message handler with logging, etc. I know several ways to do this if I fall back to Java reflection, but feel that implementing it in Clojure would help my understanding. (Note, I'm using SPI in a general sense here, not specifically referring to the way it's defined in the JAR file specification) Thanks

    Read the article

  • Zend currency custom format like "$ 1,234.56 USD"

    - by Jorre
    I'm using the Zend_Currency component to manage currencies in a web app. I can't figure out how to create a custom format for my currencies, since there are no examples on the documentation page: http://framework.zend.com/manual/en/zend.currency.options.html

    From what I read there, I could use the format parameter to set a format, but I can't find out how. Does anyone have a good code example for this problem? Currently I do the following:

        $currency->setFormat(array('display' => Zend_Currency::USE_SYMBOL));

    That works to display only the symbol, but I'm also interested in putting an extra space before or after the symbol, and in displaying currencies like this:

        "$ 1,234.56 USD"
        "€ 1.234,56 EUR"

    Read the article

  • Svn - get the list of all repos on a server so I can svnsync

    - by egarcia
    I'm attempting to create a backup of my client's existing svn repositories, which are publicly available over http. If possible, I'd like to be able to back up new repositories automatically, from any computer, without having to give console access to the server to external parties (i.e. the users could do an ls on my svn repo dir).

    My problem is that I need to know the list of svn repositories on the server; it isn't a fixed list, since the user will add new repositories over time. I'm able to list the repositories on an HTML page via Apache's mod_dav_svn module, using the SVNListParentPath On directive. I got this page: http://svn.ohwr.org/

    My question is: what is the easiest way to obtain a usable list of such repositories? I'll need to parse that list in order to make syncs, probably using shell commands. Must I parse the HTML with shell commands, or is there a better way to get that list?
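    There is no built-in svn command to enumerate repositories over plain HTTP, so scraping the SVNListParentPath index is the usual fallback. A small sketch of that (Python 2 era to match the question; the regex is a guess at mod_dav_svn's listing markup, where each repository shows up as a relative link ending in a slash):

        import re
        import urllib2

        # the SVNListParentPath index page from the question
        html = urllib2.urlopen("http://svn.ohwr.org/").read()

        # relative hrefs with a single path segment ending in "/" are repositories
        repos = sorted(set(re.findall(r'href="([^"/:?]+)/"', html)))
        for name in repos:
            print(name)

    Each name could then be fed to svnsync init / svnsync sync (or svnadmin create on the backup host) from the same script, which avoids parsing HTML with shell commands entirely.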

    Read the article

  • Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information

    - by pooyakhamooshi
    I have developed an application using Entity Framework, SQL Server 2000, VS 2008 and Enterprise Library. It works absolutely fine locally, but when I deploy the project to our test environment, I get the following error:

        "Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information."

    Stack trace:

        at System.Reflection.Module._GetTypesInternal(StackCrawlMark& stackMark)
        at System.Reflection.Assembly.GetTypes()
        at System.Data.Metadata.Edm.ObjectItemCollection.AssemblyCacheEntry.LoadTypesFromAssembly(LoadingContext context)
        at System.Data.Metadata.Edm.ObjectItemCollection.AssemblyCacheEntry.InternalLoadAssemblyFromCache(LoadingContext context)
        at System.Data.Metadata.Edm.ObjectItemCollection.AssemblyCacheEntry.LoadAssemblyFromCache(Assembly assembly, Boolean loadReferencedAssemblies, Dictionary`2 knownAssemblies, Dictionary`2& typesInLoading, List`1& errors)
        at System.Data.Metadata.Edm.ObjectItemCollection.LoadAssemblyFromCache(ObjectItemCollection objectItemCollection, Assembly assembly, Boolean loadReferencedAssemblies)
        at System.Data.Metadata.Edm.ObjectItemCollection.LoadAssemblyForType(Type type)
        at System.Data.Metadata.Edm.MetadataWorkspace.LoadAssemblyForType(Type type, Assembly callingAssembly)
        at System.Data.Objects.ObjectContext.CreateQuery[T]

    Entity Framework seems to have an issue here; any clue how to fix it?

    Read the article
