Search Results

Search found 10527 results on 422 pages for 'ccnet config'.


  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections to the backend?

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multisecond delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5ms. I am looking for an HTTP server, balancer or proxy server that supports the following:
    1. A request arrives at the proxy.
    2. The proxy starts buffering the request in RAM or on disk, including headers and POST/PUT bodies. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
    3. The proxy server stops buffering the request when a size limit has been reached (say, 4KB), or the request has been received completely, headers and body.
    4. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.
    5. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64KB). Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
    6. The proxy sends back the response to the mobile client, as fast or as slow as the client is capable of, without a connection to the backend tying up resources.
    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Any of them that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model? (Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh.)
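    For the request side specifically, nginx of that era already buffers the complete request body before it opens the upstream connection, and proxy buffering covers the response side. A hedged configuration sketch (the upstream name is an assumption, and the chunked-POST limitation mentioned in the rant still applies):

        # hedged sketch of the relevant nginx buffering knobs; sizes mirror the 4KB/64KB in the question
        location / {
            client_body_buffer_size 4k;    # request bodies up to 4k are held in memory before relaying
            proxy_buffering on;            # buffer the upstream response before sending it to the client
            proxy_buffer_size 8k;
            proxy_buffers 8 8k;            # roughly 64k of response buffered per request
            proxy_pass http://backend;     # "backend" is an assumed upstream block, not from the question
        }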

    Read the article

  • Git Diff with Beyond Compare

    - by Avanst
    I have succeeded in getting git to start Beyond Compare 3 as a diff tool; however, when I do a diff, the file I am comparing against is not being loaded. Only the latest version of the file is loaded and nothing else, so there is nothing in the right pane of Beyond Compare. I am running git 1.6.3.1 under Cygwin with Beyond Compare 3. I have set up Beyond Compare as they suggest in the support part of their website, with a script like this:
        #!/bin/sh
        # diff is called by git with 7 parameters:
        # path old-file old-hex old-mode new-file new-hex new-mode
        "path_to_bc3_executable" "$2" "$5" | cat
    Has anyone else encountered this problem and found a solution? Edit: I have followed the suggestions by VonC but I am still having exactly the same problem as before. I am kinda new to Git, so perhaps I am not using the diff correctly. For example, I am trying to see the diff on a file with a command like this: git diff main.css Beyond Compare will then open and only display my current main.css in the left pane; there is nothing in the right pane. I would like to see my current main.css in the left pane compared to HEAD, basically what I last committed. My git-diff-wrapper.sh looks like this:
        #!/bin/sh
        # diff is called by git with 7 parameters:
        # path old-file old-hex old-mode new-file new-hex new-mode
        "c:/Program Files/Beyond Compare 3/BCompare.exe" "$2" "$5" | cat
    My git config looks like this for diff:
        [diff]
            external = c:/cygwin/bin/git-diff-wrapper.sh
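    Not a fix for the wrapper itself, but since git 1.6.3 also ships git-difftool, a simpler setup worth trying is to register Beyond Compare as a difftool and let git supply the temp files itself. A sketch, with the executable path taken from the question:

        # configure Beyond Compare 3 as a difftool ($LOCAL/$REMOTE are filled in by git)
        git config --global diff.tool bc3
        git config --global difftool.bc3.cmd '"c:/Program Files/Beyond Compare 3/BCompare.exe" "$LOCAL" "$REMOTE"'

        # compare the working copy of main.css against the last commit
        git difftool HEAD -- main.css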

    Read the article

  • Why Spark viewengine renders unnecessary (or unexpected) quotes?

    - by Arnis L.
    If I add pageBaseType="Spark.Web.Mvc.SparkView" in my web.config (necessary to fix IntelliSense), somehow it no longer renders links (and probably other things) correctly. [screenshot omitted: how it is supposed to look, and does look, when the page base type is not specified] [screenshot omitted: how it looks when the base type is specified] Chrome's source viewer shows identical page source for both cases:
        <body>
          <div class="content">
            <div class="navigation">
              <a href="/Employee/List">Employees</a>
              <a href="/Product/List">Products</a>
              <a href="/Store/List">Stores</a>
              <div class="navigation_title"> Navigation</div>
            </div>
            <div class="main">
              <div class="content">
                <h2>Employees</h2>Nothing found... &lt;a href=&quot;/Employee/Create&quot;&gt;Create&lt;/a&gt;
              </div>
            </div>
          </div>
        </body>
    [screenshot omitted: the developer tools view does not] So why does my link get HTML-encoded (if that's what happens)? If it's the default behavior, how do I render raw HTML? Using the latest Spark version, rebuilt with the ASP.NET MVC 2 RC assemblies.
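    If the base-page change is switching automatic HTML encoding on, Spark's raw-output syntax is the usual escape hatch. A hedged sketch (verify the behavior against your Spark version):

        <!-- ${...} is HTML-encoded when automatic encoding is on; !{...} writes the markup raw -->
        ${Html.ActionLink("Create", "Create", "Employee")}   <!-- may come out as &lt;a href=...&gt; -->
        !{Html.ActionLink("Create", "Create", "Employee")}   <!-- renders the real anchor tag -->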

    Read the article

  • unique log file with log4net

    - by Luca Romagnoli
    Hi, I'm using log4net for logging on my website. Every day a new file is created, like "filename.log24-06-2009". This is the configuration in the web.config file:
        <log4net>
          <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
            <file value="App_Data\Missioni.log" />
            <appendToFile value="true" />
            <rollingStyle value="Composite" />
            <!--<datePattern value="yyyy-MM-dd" />-->
            <maxSizeRollBackups value="5" />
            <maximumFileSize value="5MB" />
            <layout type="log4net.Layout.PatternLayout">
              <header value="[Header]&#xA;" />
              <footer value="[Footer]&#xA;" />
              <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
            </layout>
          </appender>
          <root>
            <level value="DEBUG" />
            <appender-ref ref="RollingLogFileAppender" />
          </root>
        </log4net>
    How can I make it write to a single log file? Thanks.
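    A hedged sketch of the single-file option: either keep the RollingFileAppender but set rollingStyle to Size only (so it never rolls on the date), or switch to the plain FileAppender, which never rolls at all:

        <appender name="SingleLogFileAppender" type="log4net.Appender.FileAppender">
          <file value="App_Data\Missioni.log" />
          <appendToFile value="true" />
          <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
          </layout>
        </appender>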

    Read the article

  • Servlet requests are executed sequentially for no apparent reason in Glassfish v3

    - by Fabien Benoit
    Hi, I'm using the Glassfish 3 Web Profile and can't get the HTTP workers to execute requests concurrently on a servlet. This is how I observed the problem. I've made a very simple servlet that writes the current thread name to standard output and sleeps for 10 seconds:
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            System.out.println(Thread.currentThread().getName());
            try {
                Thread.sleep(10000); // 10 sec
            } catch (InterruptedException ex) {}
        }
    And when I'm running several simultaneous requests, I clearly see in the logs that the requests are executed sequentially (one trace every 10 seconds):
        INFO: http-thread-pool-8080-(2)
        (10 seconds later...)
        INFO: http-thread-pool-8080-(1)
        (10 seconds later...)
        INFO: http-thread-pool-8080-(2)
        etc.
    All my GF settings are untouched - it's the out-of-the-box config (the default thread pool is 2 threads min, 5 max, if I recall properly). ...I really don't understand why the sleep() blocks all the other worker threads. Any insight would be greatly appreciated! Thanks, Fabien
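    One thing worth ruling out before blaming GlassFish: browsers often reuse a single keep-alive connection (or serialize identical requests to the same URL), which looks exactly like sequential execution on the server. A hedged sketch of a standalone multi-threaded test client; the servlet URL is an assumption:

        // fires 5 GETs from separate connections so browser connection reuse cannot serialize them
        import java.net.URL;

        public class ConcurrentGet {
            public static void main(String[] args) {
                for (int i = 0; i < 5; i++) {
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                // placeholder URL: adjust to wherever the sleepy servlet is mapped
                                new URL("http://localhost:8080/myapp/sleepy").openStream().close();
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                    }).start();
                }
            }
        }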

    Read the article

  • How is unauthenticated site navigation handled in ASP.NET?

    - by Code Sherpa
    Hi. I am wondering how to do the following... I have a registration system. When the user successfully registers, he is then led down a series of data gathering pages (for his profile) and then, finally, ends on his profile's home page where he can start to use the site. All this happens without ever logging into the system so, he is unauthenticated and unconfirmed. My question is, how does this happen? How can I allow my user to be unauthenticated (and unconfirmed, but this I understand) and use all aspects of the Web site? The way I have things set up right now, my code should be doing this: case CreateProfileStatus.Success: //FormsAuthentication.SetAuthCookie(userName, false); Response.Redirect("NextPage.aspx", false); break; but, I am being redirected to the login page after registration which is not the result I want. This is what the relevant nodes in my web.config look like: <authentication mode="Forms"> <forms name=".AuthCookie" loginUrl="default.aspx" protection="All"/> </authentication> <authorization> <deny users="?"/> <allow roles="Administrators" /> </authorization> <anonymousIdentification enabled="true" cookieName=".ASPXANONYMOUS" cookieTimeout="100000" cookiePath="/" cookieRequireSSL="false" cookieSlidingExpiration="true" cookieProtection="Encryption" cookieless="UseCookies" domain="" /> When the user logs out after the registration and initial interaction with the site he will be required to log in upon return. At this point he must be authenticated but does not need to be confirmed for a period of time. Eventually, he will be reminded. So, how is this done? Thanks in advance.
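    With <deny users="?"/> at the root, every unauthenticated request gets bounced to the login URL, which matches the redirect being seen. Two common ways around it (a sketch; the page path below is the one from the question, the rest is an assumption): un-comment the SetAuthCookie call so the user really is authenticated right after registration, or carve the profile-setup pages out of the deny rule with <location> overrides in web.config:

        <!-- allow anonymous users into the post-registration pages only -->
        <location path="NextPage.aspx">
          <system.web>
            <authorization>
              <allow users="?" />
            </authorization>
          </system.web>
        </location>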

    Read the article

  • Convert C# Silverlight App To AZURE CLOUD Platform?!?!

    - by Goober
    The Scenario: I've been following Brad Abrams' Silverlight tutorial on his blog.... I have tried following Brad's "How to deploy your app to the Cloud" tutorial; however, I'm struggling with it, even though it is in the same context as the first tutorial....
    The Question: Is the application structure essentially the same as the original "non-cloud based version"!? If not, which parts are different? (I get that there is a Cloud Service project added to the solution) - but what else?!
    Connection String Issue: In my "non-cloud based application", I make use of the ADO.NET Entity Framework to communicate with my database. The connection string in my web.config file looks like:
        <add name="InmZenEntities" connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=CHASEDIGITALWS3;Initial Catalog=InmarsatZenith;Integrated Security=True;MultipleActiveResultSets=True&quot;" providerName="System.Data.EntityClient" /></connectionStrings>
    However, the connection string that I get from SQL Azure looks like:
        Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=master;User ID=simongilbert;Password=myPassword;Trusted_Connection=False;
    So how do I go about merging the two when I move the "non-cloud based application" to THE CLOUD?! Any help regarding converting a Silverlight application to a cloud service and deploying it would be greatly appreciated.
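    For the connection string question specifically: the Entity Framework string is a wrapper, so the metadata=... part stays and only the inner provider connection string is swapped for the SQL Azure one. A hedged sketch built from the two strings in the question (note the Azure sample points at master, so substitute your own database; SQL Azure of that era also historically wanted the User ID in user@server form, and did not support MultipleActiveResultSets):

        <add name="InmZenEntities"
             connectionString="metadata=res://*/InmZenModel.csdl|res://*/InmZenModel.ssdl|res://*/InmZenModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Server=tcp:k12ioy1rsi.ctp.database.windows.net;Database=InmarsatZenith;User ID=simongilbert@k12ioy1rsi;Password=myPassword;Trusted_Connection=False;Encrypt=True;&quot;"
             providerName="System.Data.EntityClient" />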

    Read the article

  • Perl cron job stays running

    - by Dylan
    I'm currently using a cron job to have a Perl script that tells my Arduino to cycle my aquaponics system and all is well, except the Perl script doesn't die as intended. Here is my cron job: */15 * * * * /home/dburke/scripts/hal/bin/main.pl cycle And below is my Perl script: #!/usr/bin/perl -w # Sample Perl script to transmit number # to Arduino then listen for the Arduino # to echo it back use strict; use Device::SerialPort; use Switch; use Time::HiRes qw ( alarm ); $|++; # Set up the serial port # 19200, 81N on the USB ftdi driver my $device = '/dev/arduino0'; # Tomoc has to use a different tty for testing #$device = '/dev/ttyS0'; my $port = new Device::SerialPort ($device) or die('Unable to open connection to device');; $port->databits(8); $port->baudrate(19200); $port->parity("none"); $port->stopbits(1); my $lastChoice = ' '; my $pid = fork(); my $signalOut; my $args = shift(@ARGV); # Parent must wait for child to exit before exiting itself on CTRL+C $SIG{'INT'} = sub { waitpid($pid,0) if $pid != 0; exit(0); }; # What child process should do if($pid == 0) { # Poll to see if any data is coming in print "\nListening...\n\n"; while (1) { my $incmsg = $port->lookfor(9); # If we get data, then print it if ($incmsg) { print "\nFrom arduino: " . $incmsg . "\n\n"; } } } # What parent process should do else { if ($args eq "cycle") { my $stop = 0; sleep(1); $SIG{ALRM} = sub { print "Expecting plant bed to be full; please check.\n"; $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2\n"; $stop = 1; }; $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1\n"; print "Waiting for plant bed to fill...\n"; alarm (420); while ($stop == 0) { sleep(2); } die "Done."; } else { sleep(1); my $choice = ' '; print "Please pick an option you'd like to use:\n"; while(1) { print " [1] Cycle [2] Relay OFF [3] Relay ON [4] Config [$lastChoice]: "; chomp($choice = <STDIN>); switch ($choice) { case /1/ { $SIG{ALRM} = sub { print "Expecting plant bed to be full; please check.\n"; $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2\n"; }; $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1\n"; print "Waiting for plant bed to fill...\n"; alarm (420); $lastChoice = $choice; } case /2/ { $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2"; $lastChoice = $choice; } case /3/ { $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1"; $lastChoice = $choice; } case /4/ { print "There is no configuration available yet. Please stab the developer."; } else { print "Please select a valid option.\n\n"; } } } } } Why wouldn't it die from the statement die "Done.";? It runs fine from the command line and also interprets the 'cycle' argument fine. When it runs in cron it runs fine, however, the process never dies and while each process doesn't continue to cycle the system it does seem to be looping in some way due to the fact that it ups my system load very quickly. If you'd like more information, just ask. 
EDIT: I have changed to code to: #!/usr/bin/perl -w # Sample Perl script to transmit number # to Arduino then listen for the Arduino # to echo it back use strict; use Device::SerialPort; use Switch; use Time::HiRes qw ( alarm ); $|++; # Set up the serial port # 19200, 81N on the USB ftdi driver my $device = '/dev/arduino0'; # Tomoc has to use a different tty for testing #$device = '/dev/ttyS0'; my $port = new Device::SerialPort ($device) or die('Unable to open connection to device');; $port->databits(8); $port->baudrate(19200); $port->parity("none"); $port->stopbits(1); my $lastChoice = ' '; my $signalOut; my $args = shift(@ARGV); # Parent must wait for child to exit before exiting itself on CTRL+C if ($args eq "cycle") { open (LOG, '>>log.txt'); print LOG "Cycle started.\n"; my $stop = 0; sleep(2); $SIG{ALRM} = sub { print "Expecting plant bed to be full; please check.\n"; $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2\n"; $stop = 1; }; $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1\n"; print "Waiting for plant bed to fill...\n"; print LOG "Alarm is being set.\n"; alarm (420); print LOG "Alarm is set.\n"; while ($stop == 0) { print LOG "In while-sleep loop.\n"; sleep(2); } print LOG "The loop has been escaped.\n"; die "Done."; print LOG "No one should ever see this."; } else { my $pid = fork(); $SIG{'INT'} = sub { waitpid($pid,0) if $pid != 0; exit(0); }; # What child process should do if($pid == 0) { # Poll to see if any data is coming in print "\nListening...\n\n"; while (1) { my $incmsg = $port->lookfor(9); # If we get data, then print it if ($incmsg) { print "\nFrom arduino: " . $incmsg . "\n\n"; } } } # What parent process should do else { sleep(1); my $choice = ' '; print "Please pick an option you'd like to use:\n"; while(1) { print " [1] Cycle [2] Relay OFF [3] Relay ON [4] Config [$lastChoice]: "; chomp($choice = <STDIN>); switch ($choice) { case /1/ { $SIG{ALRM} = sub { print "Expecting plant bed to be full; please check.\n"; $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2\n"; }; $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1\n"; print "Waiting for plant bed to fill...\n"; alarm (420); $lastChoice = $choice; } case /2/ { $signalOut = $port->write('2'); # Signal to set pin 3 low print "Sent cmd: 2"; $lastChoice = $choice; } case /3/ { $signalOut = $port->write('1'); # Signal to arduino to set pin 3 High print "Sent cmd: 1"; $lastChoice = $choice; } case /4/ { print "There is no configuration available yet. Please stab the developer."; } else { print "Please select a valid option.\n\n"; } } } } }
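    One thing visible in the first version: the fork happens before the argument check, so every cron "cycle" run leaves behind a child that spins forever on $port->lookfor after the parent die()s, which matches the climbing load. A hedged sketch of cleaning the child up before the parent exits (first version; variable names are from the script above):

        # in the parent's "cycle" branch, once the fill loop has finished:
        kill 'TERM', $pid;     # stop the listener child so cron does not orphan it
        waitpid($pid, 0);      # reap it
        exit 0;                # a plain exit; die() here only ends the parent process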

    Read the article

  • Castle Windsor, Fluent Nhibernate, and Automapping Isession closed problem

    - by SImon
    I'm new to the whole castle Windsor, Nhibernate, Fluent and Automapping stack so excuse my ignorance here. I didn't want to post another question on this as it seems there are already a huge number of questions that try to get a solution the Windsor nhib Isession management problem, but none of them have solved my problem so far. I am still getting a ISession is closed exception when I'm trying to call to the Db from my Repositories,Here is my container setup code. container.AddFacility<FactorySupportFacility>() .Register( Component.For<ISessionFactory>() .LifeStyle.Singleton .UsingFactoryMethod(() => Fluently.Configure() .Database( MsSqlConfiguration.MsSql2005. ConnectionString( c => c.Database("DbSchema").Server("Server").Username("UserName").Password("password"))) .Mappings ( m => m.AutoMappings.Add ( AutoMap.AssemblyOf<Message>(cfg) .Override<Client>(map => { map.HasManyToMany(x => x.SICCodes).Table("SICRefDataToClient"); }) .IgnoreBase<BaseEntity>() .Conventions.Add(DefaultCascade.SaveUpdate()) .Conventions.Add(new StringColumnLengthConvention(),new EnumConvention()) .Conventions.Add(new EnumConvention()) .Conventions.Add(DefaultLazy.Never()) ) ) .ExposeConfiguration(ConfigureValidator) .ExposeConfiguration(BuildDatabase) .BuildSessionFactory() as SessionFactoryImpl), Component.For<ISession>().LifeStyle.PerWebRequest.UsingFactoryMethod(kernel => kernel.Resolve<ISessionFactory>().OpenSession() )); In my repositories i inject private readonly ISession session; and use it as followes public User GetUser(int id) { User u; u = session.Get<User>(id); if (u != null && u.Id > 0) { NHibernateUtil.Initialize(u.UserDocuments); } return u; in my web.config inside <httpModules>. i have also added this line <add name="PerRequestLifestyle" type="Castle.MicroKernel.Lifestyle.PerWebRequestLifestyleModule, Castle.Windsor"/> I'm i still missing part of the puzzle here, i can't believe that this is such a complex thing to configure for a basic need of any web application development with nHibernate and castle Windsor. I have been trying to follow the code here windsor-nhibernate-isession-mvc and i posted my question there as they seemed to have the exact same issue but mine is not resolved.
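    Two things that commonly produce "ISession is closed" with exactly this setup (hedged): the repositories that receive the ISession must not be registered as singletons, otherwise they hold on to a session from an earlier request, and under the IIS7 integrated pipeline the per-web-request lifestyle module has to be registered in <system.webServer>, not only in <httpModules>. A sketch of the latter:

        <system.webServer>
          <modules>
            <add name="PerRequestLifestyle"
                 type="Castle.MicroKernel.Lifestyle.PerWebRequestLifestyleModule, Castle.Windsor" />
          </modules>
        </system.webServer>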

    Read the article

  • Problem installing mysql gem on Snow Leopard: uninitialized constant MysqlCompat::MysqlRes

    - by emson
    Hi All I've got a problem trying to install the Ruby mysql gem driver. I recently upgraded to Snow Leopard and did the Hivelogic manual install of MySQL. This all seems to work fine as I can access mysql from the command line and make changes to the database. My problem is that if I now use rake db:migrate I get: rake aborted! uninitialized constant MysqlCompat::MysqlRes (See full trace by running task with --trace) Now it appears that my mysql gem isn't working correctly as I can access MySQL fine from Python using the Python driver (which I compiled to). I therefore tried to rebuild the gem using the following command from this site: http://techliberty.blogspot.com/, (incidentally I am using a recent Intel MacBook Pro): sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config This compiles although I get No definition for the documentation: Building native extensions. This could take a while... Successfully installed mysql-2.8.1 1 gem installed Installing ri documentation for mysql-2.8.1... No definition for next_result No definition for field_name ... I'm a little stumped as my mysql_config is located in the correct place: /usr/local/mysql/bin/mysql_config And I have removed all other instances of the mysql gem, from my system. Any suggestions would be greatly appreciated. Many thanks. PS I saw this previous post http://stackoverflow.com/questions/1332207/uninitialized-constant-mysqlcompatmysqlres-using-mms2r-gem but it doesn't seem applicable for my version.
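    The usual cause of that exact error (hedged) is that the compiled extension built fine but cannot find the 64-bit libmysqlclient at runtime. Two things to try; the paths below are assumptions, so adjust them to the Hivelogic install:

        # point the dynamic linker at the MySQL client library before running rake
        export DYLD_LIBRARY_PATH=/usr/local/mysql/lib
        rake db:migrate

        # or inspect the compiled bundle to see which dylib it fails to resolve
        gem contents mysql | grep '\.bundle$'
        otool -L /path/to/the/listed/mysql_api.bundle   # path and bundle name here are placeholders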

    Read the article

  • Is it possible to run PHPUnit with a dynamically loaded extension library?

    - by therefromhere
    I have a suite of PHPUnit tests for my extension, and I want to run them as part of the extension's Hudson build process. So I want to run PHPUnit specifying the extension library to load at runtime, but I can't figure out how to do this. My directory structure is as follows:
        /myextension.c
        /otherextensionfiles.*
        /modules/myextension.so
        /tests/unittests.php
    I've tried running PHPUnit with a configuration XML file as follows:
        <phpunit>
          <php>
            <ini name="extension_dir" value="../modules/"/>
            <ini name="extension" value="myextension.so"/>
          </php>
        </phpunit>
    And then running it as follows (from the tests directory): phpunit --configuration config.xml unittests.php But then I get Fatal error: Call to undefined function myfunction(), so it's not loading the library. I've also tried: phpunit -d extension_dir=../modules/ -d extension=myextension.so unittests.php And also dl('myextension.so') in the test setup, but no joy. If it's relevant, this is using PHP 5.2 and PHPUnit 3.4.11.
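    The 'extension' directive only takes effect while the interpreter is starting up, so PHPUnit's -d and its <ini> configuration (both applied via ini_set() after startup) never get a chance to load the .so. Passing -d to the php binary that launches PHPUnit usually does work; a sketch using the paths from the question:

        # load the extension at interpreter startup, then hand control to PHPUnit
        php -d extension_dir=../modules/ -d extension=myextension.so \
            "$(which phpunit)" unittests.php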

    Read the article

  • Using MSBuild 4 command line to publish ASP.NET web application

    - by meandmycode
    In previous msbuild we used the target '_CopyWebApplication' in order to build and convert the source of a project into a published site, this worked OK, but wasn't ideal. In .NET 4, the publishing process is somewhat more sophisticated and additionally seems a bit of a black box to understand. Whilst packages look great, I cannot fully understand how they can be harnessed by a build server, the build server would not get any manifest information, and equally, something (msbuild?) is CREATING this manifest information FROM the project file. In our build server, I ideally want to say, here is my csproj file, deploy it by the package configuration 'x'. I'm trying to understand the workflow I need to make this happen. Right now when I use _CopyWebApplication, the result is different to doing a publish from visual studio 2010, primarily that web.config transforms aren't processed, and obviously msdeploy isn't involved at all. Can somebody point me in the right direction, I believe I need to get msbuild to do the equiv of 'Build Deployment Package', and then use msdeploy to deploy this from our build server to our CI testing environments. I know this is a very vague post, but I hope somebody can give me some hints, I'll be continuing research also, so if I make any progress, I'll post my findings here. Thanks in advance, Stephen.
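    A hedged sketch of the command-line flow, assuming the standard VS2010 web-publishing targets and their default output locations: /t:Package builds the deployment package with the web.config transforms applied, and the generated .deploy.cmd wraps msdeploy for the push to a CI environment. Project and server names below are placeholders:

        msbuild MyWebApp.csproj /p:Configuration=Release /t:Package
        rem the package lands under obj\Release\Package\ together with MyWebApp.deploy.cmd
        obj\Release\Package\MyWebApp.deploy.cmd /Y /M:https://ci-server:8172/msdeploy.axd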

    Read the article

  • Moving a large number of files in one directory to multiple directories

    - by Axsuul
    I am looking to create a Windows batch script to move about 2,000 files and splitting them up so that there are 10 files per folder. I have attempted creating a batch script but the syntax really boggles my mind. Here is what I have so far @echo off :: Config parameters set /a groupsize = 10 :: initial counter, everytime counter is 1, we create new folder set /a n = 1 :: folder counter set /a nf = 1 for %%f in (*.txt) do ( :: if counter is 1, create new folder if %n% == 1 ( md folder%nf% set /a n += 1 ) :: move file into folder mv -Y %%f folder%nf%\%%f :: reset counter if larger than group size if %n% == %groupsize% ( set /a n = 1 ) else ( set /a n += 1 ) ) pause Basically what this script does is loop through each .txt file in the directory. It creates a new directory in the beginning and moves 10 files into that directory, then creates a new folder again and moves another 10 files into that directory, and so on. However, I'm having problems where the n variable is not being incremented in the loop? I'm sure there's other errors too since the CMD window closes on me even with pause. Any help or guidance is appreciated, thanks for your time!
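    The counter never appears to move because %n% is expanded once, when the whole for-block is parsed; inside the block you need delayed expansion and !n!. Two other snags in the draft: mv is not a Windows command (move /y is), and :: comments inside a parenthesized block can break parsing, so rem is safer. A hedged sketch along the same lines as the original:

        @echo off
        setlocal enabledelayedexpansion
        set /a groupsize=10
        set /a n=1
        set /a nf=1
        for %%f in (*.txt) do (
            rem create the folder when starting a new group of files
            if !n! == 1 md folder!nf!
            move /y "%%f" "folder!nf!\"
            rem reset the counter once the group is full, otherwise keep counting
            if !n! == !groupsize! (
                set /a n=1
                set /a nf+=1
            ) else (
                set /a n+=1
            )
        )
        pause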

    Read the article

  • Configuring zend to use gmail smtp: Windows Apache dev-environment: "Could not open socket" error - repeatedly - going mad

    - by confused
    My dev environment is Win XP SP2 / Apache 2.something PHP 5.something_or_other My prod env is Linux Ubuntu / Apache 2.something_else PHP 5.something_or_other_else The code is all Zend Framework Version: 1.11.1 I can telnet to: smtp.gmail.com 465 from the PC. I have Mercury configured on my PC to use gmail as it's smtp host and it works just fine. (MercuryC SMTP Client). Mercury is set to use port 465 and SSL on smtp.gmail.com -- No problem. Zend mail works just fine on my production environment using the production mail server to send out mail. It's the same basic application.ini but with different values in the mail variables. On my local PC dev setup, my application.ini contains: (same values as I use in Mercury) mail.templatePath = APPLICATION_PATH "/emails" mail.sender.name = "myAccount" mail.sender.email = "[email protected]" mail.host = smtp.gmail.com mail.smtp.auth = "login" mail.smtp.username = "[email protected]" mail.smtp.password = "myPassWord" mail.smtp.ssl = "ssl" mail.smtp.port = 465 I have been doing trial and error for hours trying to get a single email out with no success. In every case, regardless of server or port settings it throws an error and reports: Could not open socket. Both Apache and Mercury Core are exceptions in my Windows Firewall config. Mercury seems to be having no problem. I have searched stackoverflow before posting this and have been googling for hours -- with no success. I am slowly losing my mind I would be very much obliged for any tip as to what might be wrong. Thanks for reading. =================== BTW When I use the SAME application.ini values on my local PC as on the production host, I get the same "Could not open socket" error. Those values are: mail.templatePath = APPLICATION_PATH "/emails" mail.sender.name = "otherUser" mail.sender.email = "[email protected]" mail.host = smtp.otherServer.com mail.smtp.auth = "login" mail.smtp.username = "[email protected]" mail.smtp.password = "otherPAssWord" mail.smtp.ssl = "ssl" mail.smtp.port = 465 I know these work in the production (Ubuntu) environment. I'm utterly baffled.
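    "Could not open socket" from Zend_Mail_Transport_Smtp on a Windows dev box is very often just the missing openssl extension: without it the ssl:// and tls:// stream transports the SMTP transport needs are not registered, regardless of server or port. A hedged checklist; make sure you edit the php.ini that the Apache SAPI actually loads (check with phpinfo(), not the CLI):

        ; in the php.ini used by Apache on the Windows machine
        extension=php_openssl.dll

        ; after restarting Apache, confirm the transport exists from a test page:
        ; var_dump(in_array('ssl', stream_get_transports()));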

    Read the article

  • Celery tasks not working with gevent

    - by Novarg
    When i use celery + gevent for tasks that uses subprocess module i'm getting following stacktrace: Traceback (most recent call last): File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 228, in trace_task R = retval = fun(*args, **kwargs) File "/home/venv/admin/lib/python2.7/site-packages/celery/task/trace.py", line 415, in __protected_call__ return self.run(*args, **kwargs) File "/home/webapp/admin/webadmin/apps/loggingquarantine/tasks.py", line 107, in release_mail_task res = call_external_script(popen_obj.communicate) File "/home/webapp/admin/webadmin/apps/core/helpers.py", line 42, in call_external_script return func_to_call(*args, **kwargs) File "/usr/lib64/python2.7/subprocess.py", line 740, in communicate return self._communicate(input) File "/usr/lib64/python2.7/subprocess.py", line 1257, in _communicate stdout, stderr = self._communicate_with_poll(input) File "/usr/lib64/python2.7/subprocess.py", line 1287, in _communicate_with_poll poller = select.poll() AttributeError: 'module' object has no attribute 'poll' My manage.py looks following (doing monkeypatch there): #!/usr/bin/env python from gevent import monkey import sys import os if __name__ == "__main__": if not 'celery' in sys.argv: monkey.patch_all() os.environ.setdefault("DJANGO_SETTINGS_MODULE", "webadmin.settings") from django.core.management import execute_from_command_line sys.path.append(".") execute_from_command_line(sys.argv) Is there a reason why celery tasks act like it wasn't patched properly? p.s. strange thing that my local setup on Macos works fine while i getting such exceptions under Centos (all package versions are the same, init and config scripts too)
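    The worker ends up gevent-patched anyway (celery applies the patching for its gevent pool), and gevent's select replacement has no poll(), which is exactly what the stdlib subprocess reaches for in communicate(). With gevent 1.0 or newer, the cooperative subprocess module is the usual way out; a hedged sketch, with a placeholder command:

        # assumption: gevent >= 1.0, which ships gevent.subprocess
        from gevent import subprocess

        popen_obj = subprocess.Popen(['/usr/local/bin/release_mail.sh'],  # placeholder command
                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = popen_obj.communicate()  # yields to the hub instead of hitting select.poll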

    Read the article

  • How do you configure an SDL Tridion CME extension for a subset of views?

    - by Chris Summers
    I have created a new editor for SDL Tridion which adds some new functionality to the ribbon bar. This is enabled by adding the following snippet to the editor.config <!-- ItemCommenting PowerTool --> <ext:extension assignid="ItemCommenting" name="Save and&lt;br/&gt;Comment" pageid="HomePage" groupid="ManageGroup" insertbefore="SaveCloseBtn"> <ext:command>PT_ItemCommenting</ext:command> <ext:title>Save and Comment</ext:title> <ext:issmallbutton>false</ext:issmallbutton> <ext:dependencies> <cfg:dependency>PowerTools.Commands</cfg:dependency> </ext:dependencies> <ext:apply> <ext:view name="*" /> </ext:apply> </ext:extension> This is applied to all views by using a wildcard value in the node. This has results in my new button being added to the ribbon of every view, including the main dashboard. Is there a way to add this to all views except for the dashboard? Or do I have to create something like this? <ext:apply> <ext:view name="PageView" /> <ext:view name="ComponentView" /> <ext:view name="SchemaView" /> </ext:apply> If this is the only way to achieve the result I need, is there a list of all the view names somewhere?

    Read the article

  • How can I display an ASP.NET MVC html part from one application in another

    - by Frank Sessions
    We have several asp.net MVC apps in the following setup SecurityApp (root application - handles forms auth for SSO and has a profile edit page) Application1 (virtual directory) Application2 (virtual directory) Application3 (virtual directory) so that domain.com points to SecurityApp and domain.com/Application1 etc point to their associated virtual directories. All of our Single Sign On (SSO) is working properly using forms authentication. Based on the users permissions when logging in a menu that lists their available applications and a logout link will be generated and saved in the cache - this menu displays fine whenever the user is in the SecurityApp (editing their profile) but we cannot figure out how to get the Applications in the virtual directories to display the same application menu. We have tried: 1) Using JSONP to do an request that will return the html for the menu. The ajax call returns the HTML with the html; however, because User.IsAuthenticated is false the menu comes back empty. 2) We created a user control and include it along with the dll's for the SecurityApp project and this works; however, we dont want to have to include all the dlls for the SecurityApp project in every application that we create (along with all the app settings in the web.config) We would like this to be as simple as possible to implement so that anyone creating a new app can add the menu to their application in as few steps as possible... Any ideas? To Clarify - we are using ASP.NET MVC 1.0 since these apps are in production and we do not have the okay to go to ASP.NET MVC 2.0 (unfortunately)
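    One low-coupling option (a hedged sketch, not the only approach): keep the menu endpoint in SecurityApp, but have each child application fetch it server-side and forward the forms-auth cookie, so User.IsAuthenticated is true on the other end; the empty JSONP result is consistent with the cookie never reaching SecurityApp. The endpoint URL below is hypothetical; the cookie name is the one from the shared forms configuration:

        // needs System.Net, System.IO, System.Web; fetches the shared menu with the user's auth cookie
        public static string GetMenuHtml(HttpRequestBase request)
        {
            var menuRequest = (HttpWebRequest)WebRequest.Create("http://domain.com/Menu/Render"); // hypothetical endpoint
            menuRequest.CookieContainer = new CookieContainer();
            var authCookie = request.Cookies[".AuthCookie"];
            if (authCookie != null)
                menuRequest.CookieContainer.Add(
                    new Cookie(authCookie.Name, authCookie.Value, "/", request.Url.Host));
            using (var response = (HttpWebResponse)menuRequest.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                return reader.ReadToEnd();
        }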

    Read the article

  • Visual Studio 2008 / ASP.NET 3.5 / C# -- issues with intellisense, references, and builds

    - by goober
    Hey all, Hoping you can help me -- the strangest thing seems to have happened with my VS install. System config: Windows 7 Pro x64, Visual Studio 2008 SP1, C#, ASP.NET 3.5. I have two web site projects in a solution. I am referencing NUnit / NHibernate (did this by right-clicking on the project and selecting "Add Reference". I've done this for several projects in the past). Things were working fine but recently stopped working and I can't figure out why. Intellisense completely disappears for any files in my App_Code directory, and none of the references are recognized (they are recognized by any file in the root directory of the web site project. Additionally, pretty simple commands like the following (in Page_Load) fail (assume TextBox1 is definitely an element on the page): if (Page.IsPostBack) { str test1; test1 = TextBox1.Text; } It says that all the page elements are null or that it can't access them. At first I thought it was me, but due to the combination of issues, it seems to be Visual Studio itself. I've tried clearing the temp directories & rebuilding the solution. I've also tried tools -- options -- text editor settings to ensure intellisense is turned on. I'd appreciate any help you can give! Thanks, Sean

    Read the article

  • ZF2: How to get Zend\Navigation inside custom router?

    - by Katan87
    I have custom router and I have to get access to Zend\Navigation inside this custom router. I was googling, asking and searching and no results :/ All I need is to find nodes with 'link' param using Zend\Navigation in my AliasSegment::match function. Here is my module.config.php: 'navigation' => array( // The DefaultNavigationFactory we configured in (1) uses 'default' as the sitemap key 'default' => array( // And finally, here is where we define our page hierarchy 'account' => array( 'label' => 'Account', 'route' => 'node', 'pages' => array( 'home' => array( 'label' => 'Dashboard', 'route' => 'node', 'params' => array( 'id' => '1', 'link' => '/about/gallery' ), ), 'login' => array( 'label' => 'Sign In', 'route' => 'node', 'params' => array( 'id' => '1', 'link' => '/signin' ), ), 'logout' => array( 'label' => 'Sign Out', 'route' => 'node', ), ), ), ), ), [...] 'service_manager' => array( 'factories' => array( 'translator' => 'Zend\I18n\Translator\TranslatorServiceFactory', 'Navigation' => 'Zend\Navigation\Service\DefaultNavigationFactory', ), ), [...] And here is my AliasSegment class: namespace Application\Controller; use Traversable; use Zend\Mvc\Router\Exception; use Zend\Stdlib\ArrayUtils; use Zend\Stdlib\RequestInterface as Request; use Zend\Mvc\Router\Http; class AliasSegment extends \Zend\Mvc\Router\Http\Segment { public function match(Request $request, $pathOffset = null) { //Here i need to have access to Zend\Navigation return parent::match($request, $pathOffset); } }

    Read the article

  • Apache attack on compromised server, iframe injected by string replace

    - by Quang-Tuan Luong
    My server has been compromised recently. This morning, I discovered that the intruder is injecting an iframe into each of my HTML pages. After testing, I have found out that the way he does that is by getting Apache (?) to replace every instance of
        </body>
    with
        <iframe link to malware></iframe></body>
    For example, if I browse a file residing on the server consisting of:
        </body>
        </body>
    then my browser sees a file consisting of:
        <iframe link to malware></iframe></body>
        <iframe link to malware></iframe></body>
    I have immediately stopped Apache to protect my visitors, but so far I have not been able to find what the intruder has changed on the server to perform the attack. I presume he has modified an Apache config file, but I have no idea which one. In particular, I have looked for recently modified files by time-stamp, but did not find anything noteworthy. Thanks for any help. Tuan. PS: I am in the process of rebuilding a new server from scratch, but in the meanwhile I would like to keep the old one running, since this is a business site.
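    This injection pattern is more often a rogue module or output filter loaded into Apache (or a replaced binary) than an edit to an existing config file, so timestamps on the config directory can look clean. A hedged sketch of where to look; paths vary by distro:

        apachectl -t -D DUMP_MODULES                  # list every module Apache actually loads
        grep -R "LoadModule" /etc/apache2 /etc/httpd 2>/dev/null
        ls -lt /usr/lib/apache2/modules /etc/ld.so.preload 2>/dev/null   # anything recently changed?
        rpm -Va | grep -E 'httpd|apache'              # on Debian/Ubuntu: debsums apache2*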

    Read the article

  • ruby on rails delayed_job failing with rvm

    - by mottalrd
    I have delayed_job installed and I start the daemon to run the jobs with this ruby script require 'rubygems' require 'daemon_spawn' $: << '.' RAILS_ROOT = File.expand_path(File.join(File.dirname(__FILE__), '..')) class DelayedJobWorker < DaemonSpawn::Base def start(args) ENV['RAILS_ENV'] ||= args.first || 'development' Dir.chdir RAILS_ROOT require File.join('config', 'environment') Delayed::Worker.new.start end def stop system("kill `cat #{RAILS_ROOT}/tmp/pids/delayed_job.pid`") end end DelayedJobWorker.spawn!(:log_file => File.join(RAILS_ROOT, "log", "delayed_job.log"), :pid_file => File.join(RAILS_ROOT, 'tmp', 'pids', 'delayed_job.pid'), :sync_log => true, :working_dir => RAILS_ROOT) If I run the command with rvmsudo it works perfectly If I simply use the ruby command without rvm it fails and this is the output, but I have no idea why this happens. Could you give me some clue? Thank you user@mysystem:~/redeal.it/application$ ruby script/delayed_job start production /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:16:in `kill': Operation not permitted (Errno::EPERM) from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:16:in `alive?' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:125:in `alive?' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:176:in `block in start' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:176:in `select' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:176:in `start' from /usr/local/rvm/gems/ruby-1.9.3-p194/gems/daemon-spawn-0.4.2/lib/daemon_spawn.rb:165:in `spawn!' from script/delayed_job:37:in `<main>'
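    The EPERM comes from daemon_spawn's alive? check: it sends a signal to the pid stored in tmp/pids/delayed_job.pid, and that pid belongs to another user, most likely a root-owned worker (and pid file) left behind by the earlier rvmsudo runs. A hedged clean-up sketch:

        cat tmp/pids/delayed_job.pid
        ps -fp "$(cat tmp/pids/delayed_job.pid)"      # who owns the old worker?
        sudo kill "$(cat tmp/pids/delayed_job.pid)"   # stop it, or delete the stale pid file
        ruby script/delayed_job start production      # then retry without rvmsudo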

    Read the article

  • log4net: log information into different log files

    - by Daoming Yang
    I have the following configurations in my web.config file, but how can I log the information into data.txt and general.txt separately in C#? Could anyone provide some sample code for me? <appender name="GeneralLog" type="log4net.Appender.RollingFileAppender"> <file value="App_Data/Logs/general.txt" /> <appendToFile value="true" /> <maximumFileSize value="2MB" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="5" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d{HH:mm:ss.fff} [%t] %-5p %c - %m%n" /> </layout> </appender> <appender name="DataLog" type="log4net.Appender.RollingFileAppender"> <file value="App_Data/Logs/data.txt" /> <appendToFile value="true" /> <maximumFileSize value="2MB" /> <rollingStyle value="Size" /> <maxSizeRollBackups value="5" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%d{HH:mm:ss.fff} [%t] %-5p %c - %m%n" /> </layout> </appender>
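    A common pattern for this (a hedged sketch; the logger names below are made up) is to declare one <logger> per appender with additivity off, then resolve each logger by name in code:

        <logger name="GeneralLogger" additivity="false">
          <level value="DEBUG" />
          <appender-ref ref="GeneralLog" />
        </logger>
        <logger name="DataLogger" additivity="false">
          <level value="DEBUG" />
          <appender-ref ref="DataLog" />
        </logger>

        // each named logger writes only to its own appender
        private static readonly ILog generalLog = LogManager.GetLogger("GeneralLogger");
        private static readonly ILog dataLog = LogManager.GetLogger("DataLogger");

        generalLog.Info("ends up in general.txt");
        dataLog.Info("ends up in data.txt");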

    Read the article

  • ASP.NET MVC 2.0 + Implementation of a IRouteHandler does not fire

    - by Peter
    Can anybody please help me with this as I have no idea why public IHttpHandler GetHttpHandler(RequestContext requestContext) is not executing. In my Global.asax.cs I have public class MvcApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults ); routes.Add("ImageRoutes", new Route("Images/{filename}", new CustomRouteHandler())); } protected void Application_Start() { RegisterRoutes(RouteTable.Routes); } } //CustomRouteHandler implementation is below public class CustomRouteHandler : IRouteHandler { public IHttpHandler GetHttpHandler(RequestContext requestContext) { // IF I SET A BREAK POINT HERE IT DOES NOT HIT FOR SOME REASON. string filename = requestContext.RouteData.Values["filename"] as string; if (string.IsNullOrEmpty(filename)) { // return a 404 HttpHandler here } else { requestContext.HttpContext.Response.Clear(); requestContext.HttpContext.Response.ContentType = GetContentType(requestContext.HttpContext.Request.Url.ToString()); // find physical path to image here. string filepath = requestContext.HttpContext.Server.MapPath("~/logo.jpg"); requestContext.HttpContext.Response.WriteFile(filepath); requestContext.HttpContext.Response.End(); } return null; } } Can any body tell me what I'm missing here. Simply public IHttpHandler GetHttpHandler(RequestContext requestContext) does not fire. I havn't change anything in the web.config either. What I'm missing here? Please help.
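    A likely cause, judging from RegisterRoutes: the default "{controller}/{action}/{id}" route is registered first and happily matches "Images/logo.jpg" (controller = Images, action = logo.jpg), so the custom route is never consulted. A sketch of the same code with the specific route registered before the catch-all:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // specific routes first, catch-all default last
            routes.Add("ImageRoutes", new Route("Images/{filename}", new CustomRouteHandler()));

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = "" });
        }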

    Read the article

  • PHP $_SERVER['HTTP_HOST'] vs. $_SERVER['SERVER_NAME'], am I understanding the man pages correctly?

    - by Jeff
    I did a lot of searching and also read the PHP $_SERVER man page. Do I have this right regarding which to use for my PHP scripts for simple link definitions used throughout my site? $_SERVER['SERVER_NAME'] is based on your web servers' config file (Apache2 in my case), and varies depending on a few directives: (1) VirtualHost, (2) ServerName, (3) UseCanonicalName, etc. $_SERVER['HTTP_HOST'] is based on the request from the client. Therefore, it would seem to me that the proper one to use in order to make my scripts as compatible as possible would be $_SERVER['HTTP_HOST']. Is this assumption correct? Followup comments: I guess I got a little paranoid after reading this article and noting that someone said "they wouldn't trust any of the $_SERVER vars": http://markjaquith.wordpress.com/2009/09/21/php-server-vars-not-safe-in-forms-or-links/ and also: http://www.php.net/manual/en/reserved.variables.server.php (comment: Vladimir Kornea 14-Mar-2009 01:06) Apparently the discussion is mainly about $_SERVER['PHP_SELF'] and why you shouldn't use it in the form action attribute without proper escaping to prevent XSS attacks. My conclusion about my original question above is that it is "safe" to use $_SERVER['HTTP_HOST'] for all links on a site without having to worry about XSS attacks, even when used in forms. Please correct me if I'm wrong.
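    For link building, a middle ground that sidesteps the "trust no $_SERVER var" worry is to whitelist the host instead of echoing it blindly, since HTTP_HOST is just the client's Host header. A hedged sketch with placeholder host names:

        <?php
        $allowedHosts = array('www.example.com', 'example.com');   // placeholders
        $host = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : '';

        if (!in_array($host, $allowedHosts, true)) {
            $host = 'www.example.com';   // fall back to the canonical host
        }

        $base = 'http://' . htmlspecialchars($host, ENT_QUOTES, 'UTF-8') . '/';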

    Read the article

  • Caching and accessing configuration data in ASP.NET MVC app.

    - by Sosh
    I'm about to take a look at how to implement internationalisation for an ASP.NET MVC project. I'm looking at how to allow the user to change languages. My initial thought is a dropdownlist containing each of the supported languages. However, a few questions have come to mind: How to store the list of supported languages? (e.g. just "en", "English"; "fr", "French" etc.) An XML file? .config files? If I store this in a file I'll have to cache it (at startup I guess). So, what would be best: load the XML data into a list (somehow) and store this list in the System.Web.Cache? Application State? How then to load this data into the view (for display in a dropdown)? Give the view direct access to the cache? Just want to make sure I'm going in the right direction here... Thank you.
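    One workable shape (a hedged sketch; the file name, cache key and Language type are all placeholders): load the XML once, keep it in HttpRuntime.Cache with a file dependency, and let the view bind its dropdown to a view-model property filled from this helper rather than touching the cache directly.

        using System.Collections.Generic;
        using System.Linq;
        using System.Web;
        using System.Web.Caching;
        using System.Web.Hosting;
        using System.Xml.Linq;

        public class Language
        {
            public string Code { get; set; }
            public string Name { get; set; }
        }

        public static class LanguageConfig
        {
            public static IList<Language> GetSupportedLanguages()
            {
                const string cacheKey = "SupportedLanguages";
                var languages = HttpRuntime.Cache[cacheKey] as IList<Language>;
                if (languages == null)
                {
                    // placeholder file: <languages><language code="en" name="English"/>...</languages>
                    var path = HostingEnvironment.MapPath("~/App_Data/languages.xml");
                    languages = XDocument.Load(path)
                        .Descendants("language")
                        .Select(x => new Language
                        {
                            Code = (string)x.Attribute("code"),
                            Name = (string)x.Attribute("name")
                        })
                        .ToList();
                    // the dependency re-reads the file if it ever changes
                    HttpRuntime.Cache.Insert(cacheKey, languages, new CacheDependency(path));
                }
                return languages;
            }
        }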

    Read the article
