Search Results

Search found 13318 results on 533 pages for 'svn config'.

Page 241/533

  • Can I use a property placeholder with Spring EL?

    - by David Easley
    Before upgrading to Spring 3 I had this in my applicationContext.xml file: <bean class="com.northgateis.pole.ws.PolePayloadValidatingInterceptor"> <property name="validateRequest" value="${validateRequest}" /> <property name="validateResponse" value="${validateResponse}" /> </bean> where ${validateRequest} and ${validateResponse} refer to properties that may or may not be defined in my properties file. In Spring 2, if these properties were not present in the properties file the setters on the bean were not called and so the defaults hard-coded in PolePayloadValidatingInterceptor were used. After upgrading to Spring 3, it seems the behaviour is different: If the properties are not present in the properties file I get the following exception: SEVERE: Exception sending context initialized event to listener instance of class org.springframework.web.context.ContextLoaderListener org.springframework.beans.factory.BeanDefinitionStoreException: Invalid bean definition with name 'annotationMapping' defined in class path resource [com/northgateis/pole/ws/applicationContext-ws.xml]: Could not resolve placeholder 'validateRequest' at org.springframework.beans.factory.config.PropertyPlaceholderConfigurer.processProperties(PropertyPlaceholderConfigurer.java:272) at org.springframework.beans.factory.config.PropertyResourceConfigurer.postProcessBeanFactory(PropertyResourceConfigurer.java:75) at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:640) at org.springframework.context.support.AbstractApplicationContext.invokeBeanFactoryPostProcessors(AbstractApplicationContext.java:615) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:405) at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:272) I tried dabbling with Spring EL but the following doesn't seem to work: <bean class="com.northgateis.pole.ws.PolePayloadValidatingInterceptor"> <property name="validateRequest" value="${validateRequest?:true}" /> <property name="validateResponse" value="${validateResponse?:false}" /> </bean> The value after the Elvis operator is always used, even when the properties are defined in the properties file. Interesting that the syntax is accepted. Any suggestions?
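
    One avenue worth trying (an assumption on my part, not something covered in the post above): if memory serves, Spring 3.0's PropertyPlaceholderConfigurer accepts a default value after a colon inside the placeholder itself, which sidesteps SpEL entirely. A minimal sketch:

        <bean class="com.northgateis.pole.ws.PolePayloadValidatingInterceptor">
            <property name="validateRequest" value="${validateRequest:true}" />
            <property name="validateResponse" value="${validateResponse:false}" />
        </bean>

    With this form the value from the properties file wins when it is present, and the literal after the colon is used otherwise.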

    Read the article

  • Zend Sessions problem with IE8

    - by Emil
    I'm running a Zend Framework powered website and it seems to have serious problems with sessions. I have a 5-step process where I save the form data in the session between the steps and then save it into the database on the last step. When we built the site sometimes the session just went away and forced us to restart. Now it seems to work again but recently we discovered an issue with Internet Explorer 8. It fails between step 2 - 3 and forgets the session. It works fine in IE6, IE7, FF, Chrome, Safari and even in my mobile web browser (SE P1). We're storing our sessions in the database and if I deactivate the session db handler it works. What's the difference between using the database and not using it for sessions? Do I lose something if I switch back? Bootstrap: /* Start session */ $saveHandler = new Zend_Session_SaveHandler_DbTable(array( 'name' => 'sessions', 'primary' => 'id', 'modifiedColumn' => 'modified', 'dataColumn' => 'data', 'lifetimeColumn' => 'lifetime' )); Zend_Session::rememberMe((int) $config->session->lifetime); $saveHandler->setLifetime((int) $config->session->lifetime) ->setOverrideLifetime(true); Zend_Session::setSaveHandler($saveHandler); Zend_Session::start(); and in my step controller $session = new Zend_Session_Namespace('wizard'); Then I'm just working with $session saving data in a stdClass in $session.

    Read the article

  • Does Security Trimming work with Web Forms Routing?

    - by Slauma
    In my web.config I have configured a SiteMapProvider with securityTrimmingEnabled="true" and on my main master page is an asp:Menu control bound to an asp:SiteMapDataSource. In addition I have configured restricted access to all pages in a subfolder "Admin" (using another web.config in this subfolder). If I put a sitemapNode in Web.sitemap... <siteMapNode url="~/Admin/Default.aspx" title="Administration" description="" > ... only users in role "Admin" will have the menu item related to that siteMapNode. So this is working fine and as intended. Now I have defined a URL route in Global.asax to map the physical file to a new URL: System.Web.Routing.RouteTable.Routes.MapPageRoute("AdminHomeRoute", "Administration/Home", "~/Admin/Default.aspx"); But when I use this route-URL in the SiteMap file... <siteMapNode url="Administration/Home" title="Administration" description="" > ... it seems that security trimming does not work: The menu item is visible for all users. (Access to the page is still restricted though, so selecting the menu item by non-Admin users does not navigate to the restricted page.) Question: Is there any setting I've missed so far to make security trimming work with URL routing in ASP.NET 4.0 Web Forms? Did I do something wrong? Is there any work-around? Thank you for your help!

    Read the article

  • Hazelcast Distributed Executor Service KeyOwner

    - by János Veres
    I have a problem understanding the concept of Hazelcast Distributed Execution. It is said to be able to perform the execution on the owner instance of a specific key. From Documentation: <T> Future<T> submitToKeyOwner(Callable<T> task, Object key) Submits task to owner of the specified key and returns a Future representing that task. Parameters: task - task key - key Returns: a Future representing pending completion of the task I believe that I'm not alone to have a cluster built with multiple maps which might actually use the same key for different purposes, holding different objects (e.g. something along the following setup): IMap<String, ObjectTypeA> firstMap = HazelcastInstance.getMap("firstMap"); IMap<String, ObjectTypeA_AppendixClass> secondMap = HazelcastInstance.getMap("secondMap"); To me it seems quite confusing what the documentation says about the owner of a key. My real frustration is that I don't know WHICH key - in which map - it refers to. The documentation also gives a "demo" of this approach: import com.hazelcast.core.Member; import com.hazelcast.core.Hazelcast; import com.hazelcast.core.IExecutorService; import java.util.concurrent.Callable; import java.util.concurrent.Future; import java.util.Set; import com.hazelcast.config.Config; public void echoOnTheMemberOwningTheKey(String input, Object key) throws Exception { Callable<String> task = new Echo(input); HazelcastInstance hz = Hazelcast.newHazelcastInstance(); IExecutorService executorService = hz.getExecutorService("default"); Future<String> future = executorService.submitToKeyOwner(task, key); String echoResult = future.get(); } Here's a link to the documentation site: Hazelcast MultiHTML Documentation 3.0 - Distributed Execution Did any of you guys figure out in the past which key it wants?
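
    For what it's worth, my understanding (an assumption, not something stated in the quoted docs) is that the owner is determined by the key's partition alone and is independent of any particular map, so the same key resolves to the same owner whether it is used in firstMap, secondMap, or no map at all. A small sketch that makes the owner visible through the PartitionService:

        import com.hazelcast.core.Hazelcast;
        import com.hazelcast.core.HazelcastInstance;
        import com.hazelcast.core.Member;
        import com.hazelcast.core.Partition;

        public class KeyOwnerCheck {
            public static void main(String[] args) {
                HazelcastInstance hz = Hazelcast.newHazelcastInstance();
                String key = "orderId-42"; // illustrative key, not from the post

                // The partition (and therefore the owner) is derived from the key itself.
                Partition partition = hz.getPartitionService().getPartition(key);
                Member owner = partition.getOwner();
                System.out.println("Key '" + key + "' lives on " + owner);

                // submitToKeyOwner(task, key) would run the task on exactly this member,
                // regardless of which map (if any) the key is stored in.
            }
        }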

    Read the article

  • Apache CXF REST Services w/ Spring AOP

    - by jconlin
    I'm trying to get Apache CXF JAX-RS services working with Spring AOP. I've created a simple logging class: public class AOPLogger{ public void logBefore(){ System.out.println("Logging Before!"); } } My Spring configuration (beans.xml): <aop:config> <aop:aspect id="aopLogger" ref="test.aop.AOPLogger"> <aop:before method="logBefore" pointcut="execution(* test.rest.RestService(..))"/> </aop:aspect> </aop:config> <bean id="aopLogger" class="test.aop.AOPLogger"/> I always get an NPE in RestService when a call is made to a method getServletRequest(), which has: return messageContext.getHttpServletRequest(); If I remove the aop configuration or comment it out from my beans.xml, everything works fine. All of my actual Rest services extend test.rest.RestService (which is a class) and call getServletRequest(). I'm just trying to get AOP up and running based on the example in the CXF JAX-RS documentation. Does anyone have any idea what I'm doing wrong? Thanks!

    Read the article

  • Java - encrypt / decrypt user name and password from a configuration file

    - by nzpcmad
    We are busy developing a Java web service for a client. There are two possible choices: Store the encrypted user name / password on the web service client. Read from a config. file on the client side, decrypt and send. Store the encrypted user name / password on the web server. Read from a config. file on the web server, decrypt and use in the web service. The user name / password is used by the web service to access a third-party application. The client already has classes that provide this functionality but this approach involves sending the user name / password in the clear (albeit within the intranet). They would prefer storing the info. within the web service but don't really want to pay for something they already have. (Security is not a big consideration because it's only within their intranet). So we need something quick and easy in Java. Any recommendations? The server is Tomcat 5.5. The web service is Axis2. What encrypt / decrypt package should we use? What about a key store? What configuration mechanism should we use? Will this be easy to deploy?
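
    For the encrypt/decrypt step itself, the standard JCE is usually enough. The sketch below is only illustrative - the class name, the key handling, and the use of java.util.Base64 (Java 8+; Commons Codec offers the same on the Java 5-era JVM that Tomcat 5.5 typically runs on) are all assumptions, and where the AES key lives (for example a JKS keystore with tight file permissions) still has to be decided separately:

        import java.security.SecureRandom;
        import java.util.Base64;
        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.spec.IvParameterSpec;
        import javax.crypto.spec.SecretKeySpec;

        public class CredentialCipher {

            private final SecretKey key;

            // In practice the key would be loaded from a keystore or a locked-down file,
            // never hard-coded; taking a raw key byte[] keeps the sketch self-contained.
            public CredentialCipher(byte[] rawKey) {
                this.key = new SecretKeySpec(rawKey, "AES");
            }

            public String encrypt(String plainText) throws Exception {
                byte[] iv = new byte[16];
                new SecureRandom().nextBytes(iv);
                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
                byte[] cipherText = cipher.doFinal(plainText.getBytes("UTF-8"));
                byte[] combined = new byte[iv.length + cipherText.length];
                System.arraycopy(iv, 0, combined, 0, iv.length);
                System.arraycopy(cipherText, 0, combined, iv.length, cipherText.length);
                return Base64.getEncoder().encodeToString(combined); // IV travels with the ciphertext
            }

            public String decrypt(String encoded) throws Exception {
                byte[] combined = Base64.getDecoder().decode(encoded);
                byte[] iv = new byte[16];
                byte[] cipherText = new byte[combined.length - 16];
                System.arraycopy(combined, 0, iv, 0, 16);
                System.arraycopy(combined, 16, cipherText, 0, cipherText.length);
                Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
                cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
                return new String(cipher.doFinal(cipherText), "UTF-8");
            }

            public static void main(String[] args) throws Exception {
                SecretKey generated = KeyGenerator.getInstance("AES").generateKey();
                CredentialCipher cc = new CredentialCipher(generated.getEncoded());
                String stored = cc.encrypt("serviceUser:serviceP@ss"); // this string goes in the config file
                System.out.println(stored + " -> " + cc.decrypt(stored));
            }
        }

    The configuration file then holds only the Base64 string produced by encrypt(), and the web service decrypts it at startup.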

    Read the article

  • Session expiry times?

    - by user246114
    Hi, I've enabled sessions on my app: // appengine-web.xml <sessions-enabled>true</sessions-enabled> they seem to work when I load different pages under my domain. If I close the browser however, looks like the session is terminated. Restarting the browser shows the last session is no longer available. That could be fine, just wondering if this is documented anywhere, so I can rely on this fact? I tried the following just to test if we can tweak it: // in web.xml <session-config> <session-timeout>10</session-timeout> </session-config> also // in my servlet getThreadLocalRequest().getSession().setMaxInactiveInterval(60 * 5); but same behavior, session data is no longer available after browser restart. I looked at the stats for my project and I see data being used for something like "_ah_SESSION" objects. Are those the sessions from above? If so, shouldn't they be cleaned since they're no longer valid? (Hopefully gae takes care of that automatically?) Thanks

    Read the article

  • Git : Failed at pushing to remote server, ' REPOSITORY_PATH ' is not a git command

    - by Judarkness
    I'm using Git with TortoiseGit on Windows XP, and I have a remote bare repository on Windows Vista 64bit version. When I tried to push my local files to the remote bare repository, I got the following error message. git.exe push "origin" master:master git: 'C:/Git_Repository/.git' is not a git command. See 'git --help'. fatal: The remote end hung up unexpectedly the arbitrary URL is : username@serverip:C:/Git_Repository/.git The same arbitrary URL worked just fine while doing clone/fetch/pull. Access from a local directory on the remote machine to this bare repository has no problem either so I believe there is something wrong with my path. I can push/pull at GitHub correctly but I was using the URL provided by GitHub. Does anyone know what's wrong with my configuration? Here is my remote .git/config [core] repositoryformatversion = 0 filemode = false bare = true logallrefupdates = true ignorecase = true hideDotFiles = dotGitOnly Here is my local .git/config [core] repositoryformatversion = 0 filemode = false bare = false logallrefupdates = true symlinks = false ignorecase = true hideDotFiles = dotGitOnly [remote "origin"] fetch = +refs/heads url = username@serverip:C:/Git_Repository/.git [branch "master"] remote = origin merge = refs/heads/master Thanks for the reminder.

    Read the article

  • PHP Treat String as Variable

    - by Ygam
    Hi guys. I have a piece of code here that does not work despite me using a $$ on it to treat the string as a variable: <? foreach (KOHANA::config('list.amenities_forms') as $k => $v) : ?> <div class="form"> <fieldset> <legend><?php echo $v ?></legend> <input type="checkbox" name="<?=$k?>flag" id="<?=$k?>flag"/> <label class="inline"><?=$v?></label> <label>Description</label> <textarea cols="50" rows="5" name="<?=$k?>[]"><?= empty($$k[0]) ? '' : $$k[0]?></textarea> <label>Size</label> <input type="text" name="<?=$k?>[]" value="<?= empty($$k[1]) ? '' : $$k[1]?>"/> <label>Capacity</label> <input type="text" name="<?=$k?>[]" value="<?= empty($$k[2]) ? '' : $$k[2]?>"/> </fieldset> </div> <? endforeach?> the function Kohana::config returns this array: 'amenities_forms' => array( 'meeting_space' => 'Meeting Space', 'breakfast_room' => 'Breakfast Room', 'living_quarters' => 'Living Quarters', 'restaurant' => 'Restaurant', 'bar' => 'Bar' ) what could I be doing wrong?
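
    One detail that may be the culprit (my assumption, not something confirmed in the post): $$k[0] is ambiguous in PHP and is parsed as ${$k[0]} - index first, then variable-variable - rather than as "the array named by $k, element 0". Braces make the intended reading explicit, roughly like this:

        <?php
        $meeting_space = array('Large hall', '120 sqm', '80 people'); // sample data, not from the post
        $k = 'meeting_space';

        // Parsed as ${$k[0]}: $k[0] is 'm', so PHP looks for a variable called $m,
        // which is almost certainly not what the template intends.
        // echo $$k[0];

        // First resolve $k, then index the resulting array:
        echo ${$k}[0]; // "Large hall"
        echo ${$k}[1]; // "120 sqm"
        echo empty(${$k}[2]) ? '' : ${$k}[2]; // "80 people"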

    Read the article

  • NAnt errors when generating assembly info after project is upgraded to VS2010

    - by Grant Palin
    I have a project I recently upgraded to VS2010 - the project/solution files are updated, but I'm still targeting .NET 3.5. Until now, my standard NAnt build script has not given me any trouble. However, it appears that after updating the project, and updating the NAnt config to be aware of the new tooling, I am now receiving an error when autogenerating assembly information, which fails the build. The relevant build task is below: <asminfo output="${dir.src}\${file.commonAssemblyInfo}" language="${project.codeLanguage}"> <imports> <import namespace="System.Reflection" /> </imports> <attributes> <attribute type="AssemblyVersionAttribute" value="${project.fullversion}" /> <attribute type="AssemblyFileVersionAttribute" value="${project.fullversion}" /> <attribute type="AssemblyInformationalVersionAttribute" value="${project.fullversion}" /> <attribute type="AssemblyCopyrightAttribute" value="${assembly.copyright}" /> <attribute type="AssemblyCompanyAttribute" value="${assembly.company}" /> <attribute type="AssemblyConfigurationAttribute" value="${project.config}" /> <attribute type="AssemblyTrademarkAttribute" value="${assembly.trademark}" /> <attribute type="AssemblyProductAttribute" value="${assembly.product}" /> </attributes> </asminfo> The error is highlighted for the first line of the asminfo task. It reads: AssemblyInfo file 'C:\Users\Grant\Projects\VisualStudio\Checklist\src\CommonAssemblyInfo.cs' could not be generated. This method implicitly uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy for compatibility reasons, please use the NetFx40_LegacySecurityPolicy configuration switch. Please see http://go.microsoft.com/fwlink/?LinkID=155570 for more information. I've gathered so far that this is something new to .NET 4. Has anyone had to address this error before? Does anyone know what it is about asminfo that may be triggering the error?
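
    The error text itself names the switch to flip. Assuming the build is hosted by NAnt.exe (adjust the file name if a different executable drives the build), adding the legacy-CAS element to that executable's .config file is the commonly cited workaround:

        <configuration>
          <runtime>
            <!-- Re-enables legacy CAS policy so APIs that still rely on it keep working under .NET 4 tooling -->
            <NetFx40_LegacySecurityPolicy enabled="true" />
          </runtime>
        </configuration>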

    Read the article

  • How to set permissions or alter a git commit process when using local repositories

    - by Tony
    I have a server that contains a central git repository and one of my co-workers' development environments. My co-worker's repository's origin is the central git repository and he pushes there when he has some code to share. Likewise, I develop locally and push to the central git repository when I have some code to share, so my repository's origin is also the central git repository. The issue is that I have the central git repository under a "git" user's home directory. So when I push I am actually SSH'ing into the server as the "git" user. To be even more clear, my config has these lines: $ more .git/config [remote "origin"] fetch = +refs/heads/*:refs/remotes/origin/* url = [email protected]:fsg [branch "master"] remote = origin merge = refs/heads/master When I push, git handles this SSH + push seamlessly with, I am guessing, some sort of git shell. The issue is that when my coworker pushes, he is logged in as his own user and gets a bunch of crazy permission errors. Is there a typical way to solve this problem without opening up git's directories to a group? I think this will be problematic when I push and therefore overwrite the repository and those permissions are reset. Thanks!
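
    One conventional arrangement is a group-shared bare repository, sketched below with an illustrative group name, user name, and path (all assumptions); core.sharedRepository keeps the objects created by future pushes group-writable, which addresses the worry about permissions being reset on every push:

        # run once on the server
        sudo groupadd gitshare
        sudo usermod -a -G gitshare git
        sudo usermod -a -G gitshare coworker

        cd /home/git/fsg                          # path to the bare repository (assumed)
        git config core.sharedRepository group    # new objects are created group-writable
        sudo chgrp -R gitshare .
        sudo chmod -R g+rwX .
        sudo find . -type d -exec chmod g+s {} \; # new subdirectories inherit the group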

    Read the article

  • modx friendly urls nginx FPM php5.3 - friendly url's not working

    - by okdan
    Hi, I'm using php5.3 on nginx 0.8.53 with FPM on Modx revolution. I'm trying to get "friendly url's" to work, but all I get is 404s. In modx config, friendly url's is set to yes, friendly aliases is set to no (so it drops the suffix). My config file: server { listen 80; server_name .mydomain.net; # index index.php; root /home/mylogin/htdocs; location / { index index.php index.html; if (!-e $request_filename) { rewrite ^/(.*)$ /index.php?q=$1 last; } } # serve static files directly location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico)$ { root /home/mylogin/htdocs; access_log off; expires 30d; break; } } Fast CGI modx file: fastcgi_connect_timeout 60; fastcgi_send_timeout 300; fastcgi_buffers 4 32k; fastcgi_busy_buffers_size 32k; fastcgi_temp_file_write_size 32k; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_ignore_client_abort on; fastcgi_intercept_errors on; fastcgi_read_timeout 300; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param REDIRECT_STATUS 200;
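
    Not a confirmed fix, but a variant often suggested for MODX Revolution on nginx is to drop the if/rewrite pair in favour of try_files (this assumes the FastCGI parameters shown above are already attached to a location that handles .php requests):

        location / {
            index index.php index.html;
            try_files $uri $uri/ @modx;
        }

        location @modx {
            rewrite ^/(.*)$ /index.php?q=$1 last;
        }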

    Read the article

  • Hiding a button on pushed view and showing it when back to list view

    - by torr
    When I load my list view it has several blog posts and a refresh button on the top left. If I tap on a list item a view is pushed with the contents of that specific post. When this view is pushed in, the refresh button is hidden. But when I tap 'Back' to the parent list view, I'd like the refresh button to show (un-hide) - but it remains hidden. Any idea how to make this work? This is my View: Ext.require(['Ext.data.Store', 'MyApp.model.StreamModel'], function() { Ext.define('MyApp.view.HomeView', { extend: 'Ext.navigation.View', xtype: 'homepanel', requires: [ 'Ext.dataview.List', ], config: { title: 'Home', iconCls: 'home', styleHtmlContent: true, navigationBar: { items: [ { xtype: 'button', iconMask: true, iconCls: 'refresh', align: 'left', action: 'refreshButton', id: 'refreshButtonId' } ] }, items: { title: 'My', xtype: 'list', itemTpl: [ '<div class="post">', ... '</div>' ].join(''), store: new Ext.data.Store({ model: 'MyApp.model.StreamModel', autoLoad: true, storeId: 'stream' }), } } }); }); and my Controller: Ext.define('MyApp.controller.SingleController', { extend: 'Ext.app.Controller', config: { refs: { stream: 'homepanel' }, control: { 'homepanel list': { itemtap: 'showPost' } } }, showPost: function(list, index, element, record) { this.getStream().push({ xtype: 'panel', html: [ '<div class="post">', '</div>' ].join(''), scrollable: 'vertical', styleHtmlContent: true, }); Ext.getCmp('refreshButtonId').hide(); } });
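
    One possible approach - the navigation view's back/pop events and the button selector below are assumptions, not taken from the code above - is to let the controller show the button again whenever the navigation view returns to the list:

        Ext.define('MyApp.controller.SingleController', {
            extend: 'Ext.app.Controller',
            config: {
                refs: {
                    stream: 'homepanel',
                    refreshButton: 'homepanel button[action=refreshButton]'
                },
                control: {
                    'homepanel list': { itemtap: 'showPost' },
                    'homepanel': {
                        back: 'showRefreshButton', // back button tapped
                        pop: 'showRefreshButton'   // a view was popped off the stack
                    }
                }
            },
            showRefreshButton: function() {
                this.getRefreshButton().show();
            },
            showPost: function(list, index, element, record) {
                // ... unchanged from the original, including the hide() call ...
            }
        });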

    Read the article

  • Mono doesn't write settings defaults

    - by Petar Minchev
    Hi guys! Here is my problem. If I use only one Windows Forms project and call only - Settings.Default.Save() when running it, Mono creates a user.config file with the default value for each setting. It is fine, so far so good. But now I add a class library project, which is referenced from the Windows Forms project and I move the settings from the Windows Forms project to the Class Library one. Now I do the same - Settings.Default.Save() and to my great surprise, Mono creates a user.config file with EMPTY values (NOT the default ones) for each setting?! What's the difference between having the settings in the Windows Forms Project or in the class library one? And by the way it is not an operating system issue. It is a Mono issue, because it fails under both Windows and Linux. If I don't use Mono everything is fine, but I have to port my application to Linux, so I have to use Mono. I am really frustrated, it is blocking a project:( Thanks in advance for any suggestion you have. Regards, Petar

    Read the article

  • how to set SqlMapClient outside of spring xmls

    - by Omnipresent
    I have the following in my xml configurations. I would like to convert these to my code because I am doing some unit/integration testing outside of the container. xmls: <bean id="MyMapClient" class="org.springframework.orm.ibatis.SqlMapClientFactoryBean"> <property name="configLocation" value="classpath:sql-map-config-oracle.xml"/> <property name="dataSource" ref="IbatisDataSourceOracle"/> </bean> <bean id="IbatisDataSource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="jdbc/my/mydb"/> </bean> code I used to fetch stuff from above xmls: this.setSqlMapClient((SqlMapClient)ApplicationInitializer.getApplicationContext().getBean("MyMapClient")); my code (for unit testing purposes): SqlMapClientFactoryBean bean = new SqlMapClientFactoryBean(); UrlResource urlrc = new UrlResource("file:/data/config.xml"); bean.setConfigLocation(urlrc); DriverManagerDataSource dataSource = new DriverManagerDataSource(); dataSource.setDriverClassName("oracle.jdbc.OracleDriver"); dataSource.setUrl("jdbc:oracle:thin:@123.210.85.56:1522:ORCL"); dataSource.setUsername("dbo_mine"); dataSource.setPassword("dbo_mypwd"); bean.setDataSource(dataSource); SqlMapClient sql = (SqlMapClient) bean; // code fails here. When the XMLs are used, a SqlMapClient is what gets set up, so how come I can't convert SqlMapClientFactoryBean to SqlMapClient?
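
    A sketch of what appears to be missing (an assumption based on how Spring FactoryBeans behave): SqlMapClientFactoryBean is a factory, not a SqlMapClient, so it cannot be cast directly. Outside the container its initialization callback has to be invoked by hand and the product then retrieved from getObject():

        SqlMapClientFactoryBean factory = new SqlMapClientFactoryBean();
        factory.setConfigLocation(new UrlResource("file:/data/config.xml"));

        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("oracle.jdbc.OracleDriver");
        dataSource.setUrl("jdbc:oracle:thin:@123.210.85.56:1522:ORCL");
        dataSource.setUsername("dbo_mine");
        dataSource.setPassword("dbo_mypwd");
        factory.setDataSource(dataSource);

        factory.afterPropertiesSet();                        // what the container would normally do
        SqlMapClient sql = (SqlMapClient) factory.getObject();

    Note that afterPropertiesSet() is declared to throw Exception, so the test method needs a throws clause or a try/catch.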

    Read the article

  • CallbackValidator called with empty string

    - by Paolo Tedesco
    I am writing a custom configuration section, and I would like to validate a configuration property with a callback, like in this example: using System; using System.Configuration; class CustomSection : ConfigurationSection { [ConfigurationProperty("stringValue", IsRequired = false)] [CallbackValidator(Type = typeof(CustomSection), CallbackMethodName = "ValidateString")] public string StringValue { get { return (string)this["stringValue"]; } set { this["stringValue"] = value; } } public static void ValidateString(object value) { if (string.IsNullOrEmpty((string)value)) { throw new ArgumentException("string must not be empty."); } } } class Program { static void Main(string[] args) { CustomSection cfg = (CustomSection)ConfigurationManager.GetSection("customSection"); Console.WriteLine(cfg.StringValue); } } And my App.config file looks like this: <?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <section name="customSection" type="CustomSection, config-section"/> </configSections> <customSection stringValue="lorem ipsum"/> </configuration> My problem is that when the ValidateString function is called, the value parameter is always an empty string, and therefore the validation fails. If I just remove the validator, the string value is correctly initialized to the value in the configuration file. What am I missing?

    Read the article

  • C# Windows Service doesn't seem to like privatePath

    - by SauerC
    I wrote a C# Windows Service to handle task scheduling for our application. I'm trying to move the "business rules" assemblies into a bin subdirectory of the scheduling application to make it easier for us to do updates (stop service, delete all files in bin folder, replace with new ones, start service). I added <runtime> <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1"> <probing privatePath="bin;"/> </assemblyBinding> </runtime> to the service's app config and it works fine if the service is run as a console application. The problem is that when the service is run as a Windows service it doesn't work. It appears that when Windows runs the service the app config file gets read properly but then the service is executed as if it was in c:\windows\system32 and not the actual EXE location and that gums up the works. We've got a lot of assemblies so I really don't want to use the GAC or <codeBase>. Is it possible to have the EXE change its base directory back to where it should be when it's run as a service?
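
    One cheap thing to check (an assumption, not a confirmed cause): a Windows service starts with its working directory set to %WinDir%\System32, so anything resolved against the current directory misbehaves even though probing paths are relative to the application base. Resetting the working directory at the top of Main(), before any configuration or assembly loading happens, looks roughly like this (SchedulerService is a stand-in name):

        using System;
        using System.ServiceProcess;

        static class Program
        {
            static void Main()
            {
                // Point the working directory back at the folder the service EXE lives in.
                Environment.CurrentDirectory = AppDomain.CurrentDomain.BaseDirectory;
                ServiceBase.Run(new SchedulerService());
            }
        }

    If that doesn't help, handling AppDomain.CurrentDomain.AssemblyResolve and loading the business-rules assemblies from the bin subfolder explicitly is another route.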

    Read the article

  • Can't install do_mysql gem?

    - by maccy1
    I'm trying to install the do_mysql gem on my Snow Leopard system Macbook Pro 13", but I keep getting this error: n216-160:~ myself$ sudo gem1.9 install do_mysql Password: Building native extensions. This could take a while... ERROR: Error installing do_mysql: ERROR: Failed to build gem native extension. /opt/local/bin/ruby1.9 extconf.rb checking for mysql_query() in -lmysqlclient... no *** extconf.rb failed *** Could not create Makefile due to some reason, probably lack of necessary libraries and/or headers. Check the mkmf.log file for more details. You may need configuration options. Provided configuration options: --with-opt-dir --without-opt-dir --with-opt-include --without-opt-include=${opt-dir}/include --with-opt-lib --without-opt-lib=${opt-dir}/lib --with-make-prog --without-make-prog --srcdir=. --curdir --ruby=/opt/local/bin/ruby1.9 --with-mysql-config --without-mysql-config --with-mysql-dir --without-mysql-dir --with-mysql-include --without-mysql-include=${mysql-dir}/include --with-mysql-lib --without-mysql-lib=${mysql-dir}/lib --with-mysqlclientlib --without-mysqlclientlib Gem files will remain installed in /opt/local/lib/ruby1.9/gems/1.9.1/gems/do_mysql-0.10.0 for inspection. Results logged to /opt/local/lib/ruby1.9/gems/1.9.1/gems/do_mysql-0.10.0/ext/do_mysql_ext/gem_make.out n216-160:~ myself$ I have no idea why. I also reinstalled my version of MySQL with the MySQL 5.4.3 beta, 64-bit as others suggested but no dice. Does anyone have any idea what is wrong?
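
    One thing commonly suggested for this particular failure - the path below is an assumption, so point it at wherever mysql_config actually lives in your MySQL install - is to hand the build the mysql_config location explicitly and, on Snow Leopard, force a 64-bit build:

        sudo env ARCHFLAGS="-arch x86_64" gem1.9 install do_mysql -- \
          --with-mysql-config=/usr/local/mysql/bin/mysql_config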

    Read the article

  • Rack middleware deadlock

    - by Joel
    I include this simple Rack Middleware in a Rails application: class Hello def initialize(app) @app = app end def call(env) [200, {"Content-Type" => "text/html"}, "Hello"] end end Plug it in inside environment.rb: ... Dir.glob("#{RAILS_ROOT}/lib/rack_middleware/*.rb").each do |file| require file end Rails::Initializer.run do |config| config.middleware.use Hello ... I'm using Rails 2.3.5, Webrick 1.3.1, ruby 1.8.7 When the application is started in production mode, everything works as expected - every request is intercepted by the Hello middleware, and "Hello" is returned. However, when run in development mode, the very first request works returning "Hello", but the next request hangs. Interrupting webrick while it is in the hung state yields this: ^C[2010-03-24 14:31:39] INFO going to shutdown ... deadlock 0xb6efbbc0: sleep:- - /usr/lib/ruby/gems/1.8/gems/actionpack-2.3.5/lib/action_controller/reloader.rb:31 deadlock 0xb7d1b1b0: sleep:J(0xb6efbbc0) (main) - /usr/lib/ruby/1.8/webrick/server.rb:113 Exiting /usr/lib/ruby/1.8/webrick/server.rb:113:in `join': Thread(0xb7d1b1b0): deadlock (fatal) from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start' from /usr/lib/ruby/1.8/webrick/server.rb:113:in `each' from /usr/lib/ruby/1.8/webrick/server.rb:113:in `start' from /usr/lib/ruby/1.8/webrick/server.rb:23:in `start' from /usr/lib/ruby/1.8/webrick/server.rb:82:in `start' from /usr/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/handler/webrick.rb:14:in `run' from /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/commands/server.rb:111 from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' from script/server:3 Something to do with the class reloader in development mode. There is also mention of deadlock in the exception. Any ideas what might be causing this? Any recommendations as to the best approach to debug this?

    Read the article

  • How to provide js-ctypes in a spidermonkey embedding?

    - by Triston J. Taylor
    Summary I have looked over the code the SpiderMonkey 'shell' application uses to create the ctypes JavaScript object, but I'm a less-than novice C programmer. Due to the varying levels of insanity emitted by modern build systems, I can't seem to track down the code or command that actually links a program with the desired functionality. method.madness This js-ctypes implementation by The Mozilla Devs is an awesome addition. Since its conception, scripting has been primarily used to exert control over more rigorous and robust applications. The advent of js-ctypes to the SpiderMonkey project enables JavaScript to stand up and be counted as a full-fledged object oriented rapid application development language flying high above 'the bar' set by various venerable application development languages such as Microsoft's VB6. Shall we begin? I built SpiderMonkey with this config: ./configure --enable-ctypes --with-system-nspr followed by successful execution of: make && make install The js shell works fine and a global ctypes javascript object was verified operational in that shell. Working with code taken from the first source listing at How to embed the JavaScript Engine -MDN, I made an attempt to instantiate the JavaScript ctypes object by inserting the following code at line 66: /* Populate the global object with the ctypes object. */ if (!JS_InitCTypesClass(cx, global)) return NULL; I compiled with: g++ $(./js-config --cflags --libs) hello.cpp -o hello It compiles with a few warnings: hello.cpp: In function ‘int main(int, const char**)’: hello.cpp:69:16: warning: converting to non-pointer type ‘int’ from NULL [-Wconversion-null] hello.cpp:80:20: warning: deprecated conversion from string constant to ‘char*’ [-Wwrite-strings] hello.cpp:89:17: warning: NULL used in arithmetic [-Wpointer-arith] But when you run the application: ./hello: symbol lookup error: ./hello: undefined symbol: JS_InitCTypesClass Moreover JS_InitCTypesClass is declared extern in 'dist/include/jsapi.h', but the function resides in 'ctypes/CTypes.cpp' which includes its own header 'CTypes.h' and is compiled at some point by some command during 'make' to yield './CTypes.o'. As I stated earlier, I am less than a novice with the C code, and I really have no idea what to do here. Please give, or point me toward, a generic example of making the js-ctypes object functional in an embedding.

    Read the article

  • .NET assembly not loading from NTVDM

    - by John Reid
    I have a VDD dll that's loaded by a DOS program running inside the NTVDM. This dll uses C++/CLI and references a .NET assembly. All in all, the loading process is something like this: NTVDM runs: prntsr.com which uses VDD RegisterModule to load: prnvdd.dll which references .NET assembly: prnlib.dll The prntsr.com, prnvdd.dll and prnlib.dll files are all in the same folder. However, when loading it, I get the following exception: Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'PRNLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ecf23cee305e91b7' or one of its dependencies. The system cannot find the file specified. File name: 'PRNLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ecf23cee305e91b7' at VDD_Initialise() === Pre-bind state information === LOG: User = DOMAIN\user LOG: DisplayName = PRNLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ecf23cee305e91b7 (Fully-specified) LOG: Appbase = file:///C:/WINDOWS/system32/ LOG: Initial PrivatePath = NULL Calling assembly : (Unknown). === LOG: This bind starts in default load context. LOG: No application configuration file found. LOG: Using machine configuration file from C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\config\machine.config. LOG: Post-policy reference: PRNLib, Version=1.0.0.0, Culture=neutral, PublicKeyToken=ecf23cee305e91b7 LOG: Attempting download of new URL file:///C:/WINDOWS/system32/PRNLib.DLL. LOG: Attempting download of new URL file:///C:/WINDOWS/system32/PRNLib/PRNLib.DLL. LOG: Attempting download of new URL file:///C:/WINDOWS/system32/PRNLib.EXE. LOG: Attempting download of new URL file:///C:/WINDOWS/system32/PRNLib/PRNLib.EXE. It only searches C:\WINDOWS\system32\ for the assembly, which I guess is due to NTVDM.EXE - as this is the actual process that the assembly is being loaded into, it takes its location as the AppBase. Any ideas how to change the AppBase or otherwise work around this problem?
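
    One workaround sometimes used when the host process's base directory cannot be changed (an assumption on my part - the helper name is illustrative, and the same call can be made from C++/CLI; C# is shown for brevity): hook AssemblyResolve inside prnvdd.dll, before any PRNLib type is touched, and load the assembly from the VDD's own folder:

        using System;
        using System.IO;
        using System.Reflection;

        static class PrnLibResolver
        {
            public static void Install()
            {
                AppDomain.CurrentDomain.AssemblyResolve += delegate(object sender, ResolveEventArgs args)
                {
                    // Probe next to prnvdd.dll instead of next to NTVDM.EXE.
                    string dir = Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location);
                    string candidate = Path.Combine(dir, new AssemblyName(args.Name).Name + ".dll");
                    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
                };
            }
        }

    Install() has to run before the JIT first needs a PRNLib type - for example at the very top of VDD_Initialise - or the resolve event never gets a chance to fire.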

    Read the article

  • Class 'org.springframework.http.converter.ResourceHttpMessageConverter' not found - how to correct?

    - by Ash Kim
    Eclipse STS is reporting I have a problem with my Spring project. It's a fresh project generated from the Spring Web MVC Project Template (File-New-Spring Template Project-Spring Web MVC). When I create the project it has no problems - it's only once I modify the pom (by adding the hibernate dependencies) that STS then picks up the spring problem. Strangely if I revert the pom the problem remains. Also I can run the project on a spring tc server and all works correctly. Any ideas how I can satisfy this problem report? "Class 'org.springframework.http.converter.ResourceHttpMessageConverter' not found" mvc-config.xml /src/main/webapp/WEB-INF/spring line 9 Spring Beans Problem mvc-config.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:mvc="http://www.springframework.org/schema/mvc" xsi:schemaLocation=" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd"> <mvc:annotation-driven /> <!-- <= PROBLEMATIC LINE REPORTED BY STS --> <!-- Resolves view names to protected .jsp resources within the /WEB-INF/views directory --> <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="prefix" value="/WEB-INF/views/"/> <property name="suffix" value=".jsp"/> </bean> </beans>

    Read the article

  • Having an issue with org.hibernate.SessionException: Session is closed! in Hibernate

    - by hal10001
    I've done quite a bit of research on this with no luck, but all the answers have a tendency to point toward the session context settings in the config file. What is odd is that I get a session connection the very first time I hit the page (and therefore, a successful result set), but then when I reload I get the following exception: org.hibernate.SessionException: Session is closed! Here are my config settings that are not DB connection string related: <property name="hibernate.show_sql">false</property> <property name="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</property> <property name="hibernate.current_session_context_class">thread</property> <property name="hibernate.cache.provider_class">org.hibernate.cache.NoCacheProvider</property> <property name="hibernate.cache.use_query_cache">false</property> <property name="hibernate.cache.use_minimal_puts">false</property> Here is an example of a call I make that produces the situation I described above. public T get(int id) { session.beginTransaction(); T type; try { type = getTypeClass().cast(session.get(getTypeClass(), id)); } catch (ClassCastException classCastException) { throw new ClassCastException(classCastException.getMessage()); } session.getTransaction().commit(); return type; } The session variable reference is to a static field that contains the current session. All of the session connection details are straight out of the reference manual. For example, here is my Hibernate session utility: import org.hibernate.SessionFactory; import org.hibernate.cfg.Configuration; public class HibernateSessionFactoryUtil { private static final SessionFactory sessionFactory = buildSessionFactory(); private static SessionFactory buildSessionFactory() { try { return new Configuration().configure().buildSessionFactory(); } catch (Throwable ex) { System.err.println("Initial SessionFactory creation failed." + ex); throw new ExceptionInInitializerError(ex); } } public static SessionFactory getSessionFactory() { return sessionFactory; } }
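
    A possible explanation, offered as an assumption since the post doesn't show where the static session field is assigned: with hibernate.current_session_context_class set to thread, committing the transaction also closes the thread-bound session, so a cached reference is stale on the next request. Asking the factory for the current session on every call avoids that, roughly (Session being org.hibernate.Session):

        public T get(int id) {
            Session session = HibernateSessionFactoryUtil.getSessionFactory().getCurrentSession();
            session.beginTransaction();
            T type;
            try {
                type = getTypeClass().cast(session.get(getTypeClass(), id));
            } catch (ClassCastException classCastException) {
                throw new ClassCastException(classCastException.getMessage());
            }
            session.getTransaction().commit(); // with the "thread" context this also closes the session
            return type;
        }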

    Read the article

  • Cucumber-rails on jruby installs gem into my app's root directory with bundler

    - by brad
    Just installed cucumber 0.7.2 and cucumber-rails 0.3.1 with jruby-1.4.0 on OSX. When I run a bundle install, it places a cucumber-rails directory in my main app with all of the gem code/dependencies. First off, this is definitely not what I want and I'm not sure why this happens for cucumber-rails only. Second, if I delete this folder and just manually install cucumber-rails, when I run script/generate feature blah I get /Users/bradrobertson/.rvm/rubies/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/source_index.rb:344:in `refresh!': source index not created from disk (RuntimeError) from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:34:in `refresh!' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:29:in `initialize' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `new' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `add_frozen_gem_path' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:298:in `add_gem_load_paths' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:132:in `process' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:113:in `run' from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:13 from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:1:in `require' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:1 from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:3:in `require' from script/generate:3 Similarly running rake cucumber I get rake aborted! source index not created from disk So something obviously doesn't work. If I add that cucumber-rails directory back in, then my rake cucumber actually runs. Can someone tell me why it would need to install the gem right in my rails app? I've never seen this before. setup jruby-1.4.0 cucumber-0.7.2 cucumber-rails 0.3.1 bundler 0.9.23 webrat 0.7.1

    Read the article

  • Heroku: bash: bundle: command not found

    - by Space Monkey
    My heroku deployment is crashing with the following errors. 2012-12-12T17:16:18+00:00 app[web.1]: bash: bundle: command not found 2012-12-12T17:16:19+00:00 heroku[web.1]: Process exited with status 127 2012-12-12T17:16:19+00:00 heroku[web.1]: State changed from starting to crashed The Heroku documentation for this error is to set the PATH and GEM_PATH variables as described in https://devcenter.heroku.com/articles/changing-ruby-version-breaks-path I tried that, however that too is not helping. ? heroku config:add PATH=bin:vendor/bundle/ruby/1.9.1/bin:/usr/local/bin:/usr/bin:/bin ? heroku config:add GEM_PATH=vendor/bundle/ruby/1.9.1 ? heroku run rake db:migrate Running rake db:migrate attached to terminal... up, run.7130 bash: bundle: command not found Next, I tried setting Ruby version in my Heroku app. This increased the slugsize. But the app was still not up. Gemfile ruby "1.9.2" Pushed to Heroku -----> Using Ruby version: ruby-1.9.2 -----> Installing dependencies using Bundler version 1.2.2 heroku run "ruby -v" Running `ruby -v` attached to terminal... up, run.4483 ruby 1.9.2p320 (2012-04-20 revision 35421) [x86_64-linux] Can someone please advise?

    Read the article
