Search Results

Search found 5650 results on 226 pages for 'ref counted pointer'.


  • Infinite loop using Spring Security - Login page is protected even though it should allow anonymous access

    - by Tai Squared
    I have a Spring application (Spring version 2.5.6.SEC01, Spring Security version 2.0.5) with the following setup: web.xml <welcome-file-list> <welcome-file> index.jsp </welcome-file> </welcome-file-list> The index.jsp page is in the WebContent directory and simply contains a redirect: <c:redirect url="/login.htm"/> In the appname-servlet.xml, there is a view resolver to point to the jsp pages in WEB-INF/jsp <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" /> <property name="prefix" value="/WEB-INF/jsp/" /> <property name="suffix" value=".jsp" /> </bean> In the security-config.xml file, I have the following configuration: <http> <!-- Restrict URLs based on role --> <intercept-url pattern="/WEB-INF/jsp/login.jsp*" access="ROLE_ANONYMOUS" /> <intercept-url pattern="/WEB-INF/jsp/header.jsp*" access="ROLE_ANONYMOUS" /> <intercept-url pattern="/WEB-INF/jsp/footer.jsp*" access="ROLE_ANONYMOUS" /> <intercept-url pattern="/login*" access="ROLE_ANONYMOUS" /> <intercept-url pattern="/index.jsp" access="ROLE_ANONYMOUS" /> <intercept-url pattern="/logoutSuccess*" access="ROLE_ANONYMOUS" /> <intercept-url pattern="/css/**" filters="none" /> <intercept-url pattern="/images/**" filters="none" /> <intercept-url pattern="/**" access="ROLE_ANONYMOUS" /> <form-login login-page="/login.jsp"/> </http> <authentication-provider> <jdbc-user-service data-source-ref="dataSource" /> </authentication-provider> However, I can't even navigate to the login page and get the following error in the log: WARNING: The login page is being protected by the filter chain, but you don't appear to have anonymous authentication enabled. This is almost certainly an error. I've tried changing ROLE_ANONYMOUS to IS_AUTHENTICATED_ANONYMOUSLY, changing the login-page to index.jsp and login.htm, and adding different intercept-url values, but I can't get it so that the login page is accessible while security still applies to the other pages. What do I have to change to avoid this loop?

    Read the article

  • Windows Service Hosting WCF Objects over SSL (https) - Custom JSON Error Handling Doesn't Work

    - by bpatrick100
    I will first show the code that works in a non-SSL (http) environment. This code uses a custom JSON error handler, and all errors thrown do get bubbled up to the client JavaScript (ajax). // Create webservice endpoint WebHttpBinding binding = new WebHttpBinding(); ServiceEndpoint serviceEndPoint = new ServiceEndpoint(ContractDescription.GetContract(Type.GetType(svcHost.serviceContract + ", " + svcHost.assemblyName)), binding, new EndpointAddress(svcHost.hostUrl)); // Add exception handler serviceEndPoint.Behaviors.Add(new FaultingWebHttpBehavior()); // Create host and add webservice endpoint WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri(svcHost.hostUrl)); webServiceHost.Description.Endpoints.Add(serviceEndPoint); webServiceHost.Open(); I'll also show you what the FaultingWebHttpBehavior class looks like: public class FaultingWebHttpBehavior : WebHttpBehavior { public FaultingWebHttpBehavior() { } protected override void AddServerErrorHandlers(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher) { endpointDispatcher.ChannelDispatcher.ErrorHandlers.Clear(); endpointDispatcher.ChannelDispatcher.ErrorHandlers.Add(new ErrorHandler()); } public class ErrorHandler : IErrorHandler { public bool HandleError(Exception error) { return true; } public void ProvideFault(Exception error, MessageVersion version, ref Message fault) { // Build an object to return a json serialized exception GeneralFault generalFault = new GeneralFault(); generalFault.BaseType = "Exception"; generalFault.Type = error.GetType().ToString(); generalFault.Message = error.Message; // Create the fault object to return to the client fault = Message.CreateMessage(version, "", generalFault, new DataContractJsonSerializer(typeof(GeneralFault))); WebBodyFormatMessageProperty wbf = new WebBodyFormatMessageProperty(WebContentFormat.Json); fault.Properties.Add(WebBodyFormatMessageProperty.Name, wbf); } } } [DataContract] public class GeneralFault { [DataMember] public string BaseType; [DataMember] public string Type; [DataMember] public string Message; } The AddServerErrorHandlers() method gets called automatically once webServiceHost.Open() is called. This sets up the custom JSON error handler, and life is good :-) The problem comes when we switch to an SSL (https) environment. I'll now show you the endpoint creation code for SSL: // Create webservice endpoint WebHttpBinding binding = new WebHttpBinding(); ServiceEndpoint serviceEndPoint = new ServiceEndpoint(ContractDescription.GetContract(Type.GetType(svcHost.serviceContract + ", " + svcHost.assemblyName)), binding, new EndpointAddress(svcHost.hostUrl)); // This exception handler code below (FaultingWebHttpBehavior) doesn't work with SSL communication for some reason, need to research... // Add exception handler serviceEndPoint.Behaviors.Add(new FaultingWebHttpBehavior()); //Add Https Endpoint WebServiceHost webServiceHost = new WebServiceHost(svcHost.obj, new Uri(svcHost.hostUrl)); binding.Security.Mode = WebHttpSecurityMode.Transport; binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.None; webServiceHost.AddServiceEndpoint(svcHost.serviceContract, binding, string.Empty); Now, with this SSL endpoint code, the service starts up correctly, and WCF-hosted objects can be communicated with just fine via client JavaScript. However, the custom error handler doesn't work. The reason is that the AddServerErrorHandlers() method never gets called when webServiceHost.Open() is run.
    So, can anyone tell me what is wrong with this picture? And why is AddServerErrorHandlers() not getting called automatically, like it does when I'm using non-SSL endpoints? Thanks!
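    One thing stands out in the SSL variant: webServiceHost.AddServiceEndpoint() creates and registers a brand-new endpoint, while FaultingWebHttpBehavior was only added to the separately constructed serviceEndPoint, which is never attached to the host - so there is nothing for AddServerErrorHandlers() to fire on. A minimal, untested sketch of a possible fix, attaching the behavior to the endpoint the host actually serves:

        // AddServiceEndpoint returns the endpoint it registered, so the
        // custom error-handling behavior can be attached to that instance.
        ServiceEndpoint sslEndpoint = webServiceHost.AddServiceEndpoint(
            svcHost.serviceContract, binding, string.Empty);
        sslEndpoint.Behaviors.Add(new FaultingWebHttpBehavior());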

    Read the article

  • Disable Autocommit with Spring/Hibernate/C3P0/Atomikos ?

    - by HDave
    I have a JPA/Hibernate application and am trying to get it to run against H2 and MySQL. Currently I am using Atomikos for transactions and C3P0 for connection pooling. Despite my best efforts I am still seeing this in the log file (and DAO integration tests are failing with org.hibernate.NonUniqueObjectException even though Spring Test should be rolling back the database after each test method): [20100613 23:06:34] DEBUG [main] SessionFactoryImpl.(242) | instantiating session factory with properties: .....edited for brevity.... hibernate.connection.autocommit=true, ....more stuff follows The connection URL to H2 has AUTOCOMMIT=OFF, but according to the H2 documentation: this will not work as expected when using a connection pool (the connection pool manager will re-enable autocommit when returning the connection to the pool, so autocommit will only be disabled the first time the connection is used). So I figured (apparently correctly) that Hibernate is where I'll have to indicate I want autocommit off. I found the autocommit property documented here and I put it in my EntityManagerFactory config as follows: <bean id="myappTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">com.atomikos.icatch.jta.hibernate3.AtomikosJTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> <prop key="hibernate.connection.autocommit">false</prop> <prop key="hibernate.format_sql">true</prop> <prop key="hibernate.use_sql_comments">true</prop> </props> </property> </bean>

    Read the article

  • JNDI / Classpath problem in GlassFish

    - by Michael Borgwardt
    I am in the process of converting a large J2EE app from EJB 2 to EJB 3 (all stateless session beans, using GlassFish 2.1.1), and running out of ideas. The first EJB I converted (let's call it Foo) ran without major problems (it was the only one in its ejb-module and I could completely replace the deployment descriptor with annotations) and the app ran fine. But after converting the second one (let's call it Bar, one of several in a different ejb-module) there is a weird combination of problems: the app deploys without errors (nothing in the logs either); there is an error when looking up Bar via JNDI; and when looking at the JNDI tree in the GlassFish admin console, Bar is not present at all. Then when I look at the logs, I see this (Foo is the correct name of the EJB's remote interface of the first converted, previously working EJB): Caused by: javax.naming.NamingException: ejb ref resolution error for remote business interface Foo [Root exception is java.lang.ClassNotFoundException: Foo] at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:425) at com.sun.ejb.containers.RemoteBusinessObjectFactory.getObjectInstance(RemoteBusinessObjectFactory.java:74) at javax.naming.spi.NamingManager.getObjectInstance(NamingManager.java:304) at com.sun.enterprise.naming.SerialContext.lookup(SerialContext.java:414) at com.sun.enterprise.naming.SerialContext.list(SerialContext.java:603) at javax.naming.InitialContext.list(InitialContext.java:395) at com.sun.enterprise.admin.monitor.jndi.JndiMBeanHelper.getJndiEntriesByContextPath(JndiMBeanHelper.java:106) at com.sun.enterprise.admin.monitor.jndi.JndiMBeanImpl.getNames(JndiMBeanImpl.java:231) ... 68 more Caused by: java.lang.ClassNotFoundException: Foo at org.apache.catalina.loader.WebappClassLoader.loadClass(WebappClassLoader.java:1578) at com.sun.ejb.EJBUtils.getBusinessIntfClassLoader(EJBUtils.java:679) at com.sun.ejb.EJBUtils.lookupRemote30BusinessObject(EJBUtils.java:348) ... 75 more This is followed by more exceptions for all the entries that (like Foo) do appear in the JNDI tree. These look like this: Caused by: javax.naming.NotContextException: BarHome cannot be listed at com.sun.enterprise.naming.SerialContext.list(SerialContext.java:607) at javax.naming.InitialContext.list(InitialContext.java:395) at com.sun.enterprise.admin.monitor.jndi.JndiMBeanHelper.getJndiEntriesByContextPath(JndiMBeanHelper.java:106) at com.sun.enterprise.admin.monitor.jndi.JndiMBeanImpl.getNames(JndiMBeanImpl.java:231) ... 68 more However, there is no exception for Bar; it does not appear in the log at all except for one entry during deployment. The other EJBs in the same module do appear, as does Foo. Any ideas what could cause this or how to diagnose it further? The beans are pretty straightforward: @Stateless(name = "Foo") @RolesAllowed("FOOUSER") @TransactionAttribute(TransactionAttributeType.SUPPORTS) public class FooImpl extends BaseBean implements Foo { I'm also having some problems with the deployment descriptor for Bar (I'd like to eliminate it, but GlassFish doesn't seem to like having a bean appear only in sun-ejb-jar.xml, or having some beans in a module declared in the descriptor and others using only annotations), but I can't see how that could cause the ClassNotFoundException on Foo. Is there a way to see the classpath that GlassFish is actually using?

    Read the article

  • Project References DLL version hell

    - by Mr Shoubs
    We're having problems getting Visual Studio to pick up the latest version of a DLL from one of our projects. We have multiple class library projects (e.g. BusinessLogic, ReportData) and a number of web services; each has a reference to a Connectivity DLL we've written (this ref to the Connectivity DLL is the problem). We always point references to the DLL in the bin/debug folder (which is where we always build to for any given project), and all custom DLL references have CopyLocal = True and SpecificVersion = False. ReportData has a reference to BusinessLogic (which also has a reference to Connectivity - I don't see why this should cause a problem, but thought it worth mentioning). The weird thing is, when you click "Add Reference", browse to Connectivity/bin/debug and hover the mouse over the DLL file, the correct (latest) version is shown (version and file version are always incremented together), but when you click OK, a previous version number is pulled through. Even when I look in the current project's debug folder (where Copy Local would put the DLL after compiling), it shows the latest version number. Nowhere can I find the previous version of the DLL outside of Visual Studio, but in that project's references it has the old version - even though the path is correct. I'm at a loss as to where it might be getting the old versions from, or even why it wants that one. This is possibly the most frustrating problem I have ever come across. Does anyone know how to ensure the latest version is pulled through (preferably automatically or on compile)? EDIT: Although not exactly the scenario I'm dealing with, I was reading this article and somewhere it mentions the CLR ignoring revision numbers. Understandable (even though this hasn't been a problem before - we're on revision 39), so I thought I would update the build number; that still didn't work. In a vain attempt I thought I would update the minor version number and see if that made any difference. I'm not saying this is the answer as I have to check quite a few things first, but on the face of it, this seems to have solved my problem... Further edit: In other class libraries this seems to have solved the problem, however in a test Windows application it still pulls a previous version through :( If I increment the minor version number again, the same problem comes back and I am left with the wrong version being pulled through. Further edit: I created an entirely new project, added a reference and still had the exact same problem. This suggests the problem is restricted to the project I am referencing. Wish I knew why! Anyone had this problem before and know how to get around it? HELP!
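    If revision/build churn is indeed implicated, a hedged workaround is to separate the version the CLR binds against from the build stamp that changes on every compile. A sketch of what the AssemblyInfo.cs attributes might look like (the numbers are illustrative):

        // Keep AssemblyVersion stable so project references keep resolving to
        // the same assembly identity; bump only AssemblyFileVersion per build
        // (it is informational and not used for binding).
        using System.Reflection;

        [assembly: AssemblyVersion("2.1.0.0")]      // change only for real releases
        [assembly: AssemblyFileVersion("2.1.0.39")] // free to increment every build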

    Read the article

  • Spring <jee:remote-slsb> and JBoss AS7 - No EJB receiver available for handling

    - by Lech Glowiak
    I have got a @Remote EJB on JBoss AS 7, available under the name java:global/RandomEjb/DefaultRemoteRandom!pl.lechglowiak.ejbTest.RemoteRandom. The standalone client is a Spring application that uses a <jee:remote-slsb> bean. When trying to use that bean I get java.lang.IllegalStateException: EJBCLIENT000025: No EJB receiver available for handling [appName:, moduleName:RandomEjb, distinctName:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext@1a89031. Here is the relevant part of applicationContext.xml: <jee:remote-slsb id="remoteRandom" jndi-name="RandomEjb/DefaultRemoteRandom!pl.lechglowiak.ejbTest.RemoteRandom" business-interface="pl.lechglowiak.ejbTest.RemoteRandom"> <jee:environment> java.naming.factory.initial=org.jboss.naming.remote.client.InitialContextFactory java.naming.provider.url=remote://localhost:4447 jboss.naming.client.ejb.context=true java.naming.security.principal=testuser java.naming.security.credentials=testpassword </jee:environment> </jee:remote-slsb> <bean id="remoteClient" class="pl.lechglowiak.RemoteClient"> <property name="remote" ref="remoteRandom" /> </bean> RemoteClient.java public class RemoteClient { private RemoteRandom random; public void setRemote(RemoteRandom random){ this.random = random; } public Integer callRandom(){ try { return random.getRandom(100); } catch (Exception e) { e.printStackTrace(); return null; } } } My JBoss client dependency is org.jboss.as:jboss-as-ejb-client-bom:7.1.2.Final (packaging pom). pl.lechglowiak.ejbTest.RemoteRandom is available on the client application's classpath. jndi.properties contains exactly the same properties as in the <jee:environment> of <jee:remote-slsb>. This code runs without exception: Context ctx2 = new InitialContext(); RemoteRandom rr = (RemoteRandom) ctx2.lookup("RandomEjb/DefaultRemoteRandom!pl.lechglowiak.ejbTest.RemoteRandom"); System.out.println(rr.getRandom(10000)); But this: ApplicationContext ctx = new ClassPathXmlApplicationContext("applicationContext.xml"); RemoteClient client = ctx.getBean("remoteClient", RemoteClient.class); System.out.println(client.callRandom()); ends with exception: java.lang.IllegalStateException: EJBCLIENT000025: No EJB receiver available for handling [appName:, moduleName:RandomEjb, distinctName:] combination for invocation context org.jboss.ejb.client.EJBClientInvocationContext@1a89031. jboss.naming.client.ejb.context=true is set. Do you have any idea what I am setting wrong in <jee:remote-slsb>?

    Read the article

  • How to configure maximum number of transport channels in WCF using basicHttpBinding?

    - by Hemant
    Consider the following code, which is essentially a WCF host: [ServiceContract (Namespace = "http://www.mightycalc.com")] interface ICalculator { [OperationContract] int Add (int aNum1, int aNum2); } [ServiceBehavior (InstanceContextMode = InstanceContextMode.PerCall)] class Calculator: ICalculator { public int Add (int aNum1, int aNum2) { Thread.Sleep (2000); //Simulate a lengthy operation return aNum1 + aNum2; } } class Program { static void Main (string[] args) { try { using (var serviceHost = new ServiceHost (typeof (Calculator))) { var httpBinding = new BasicHttpBinding (BasicHttpSecurityMode.None); serviceHost.AddServiceEndpoint (typeof (ICalculator), httpBinding, "http://172.16.9.191:2221/calc"); serviceHost.Open (); Console.WriteLine ("Service is running. ENJOY!!!"); Console.WriteLine ("Type 'stop' and hit enter to stop the service."); Console.ReadLine (); if (serviceHost.State == CommunicationState.Opened) serviceHost.Close (); } } catch (Exception e) { Console.WriteLine (e); Console.ReadLine (); } } } Also the WCF client program is: class Program { static int COUNT = 0; static Timer timer = null; static void Main (string[] args) { var threads = new Thread[10]; for (int i = 0; i < threads.Length; i++) { threads[i] = new Thread (Calculate); threads[i].Start (null); } timer = new Timer (o => Console.WriteLine ("Count: {0}", COUNT), null, 1000, 1000); Console.ReadLine (); timer.Dispose (); } static void Calculate (object state) { var c = new CalculatorClient ("BasicHttpBinding_ICalculator"); c.Open (); while (true) { try { var sum = c.Add (2, 3); Interlocked.Increment (ref COUNT); } catch (Exception ex) { Console.WriteLine ("Error on thread {0}: {1}", Thread.CurrentThread.Name, ex.GetType ()); break; } } c.Close (); } } Basically, I am creating 10 proxy clients and then repeatedly calling the Add service method. Now if I run both applications and observe opened TCP connections using netstat, I find that: If both client and server are running on the same machine, the number of tcp connections is equal to the number of proxy objects. It means all requests are being served in parallel, which is good. If I run the server on a separate machine, I observed that a maximum of 2 TCP connections are opened regardless of the number of proxy objects I create. Only 2 requests run in parallel. It hurts the processing speed badly. If I switch to net.tcp binding, everything works fine (a separate TCP connection for each proxy object even if they are running on different machines). I am very confused and unable to make the basicHttpBinding use more TCP connections. I know it is a long question, but please help!
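    The "at most 2 connections to a remote machine" ceiling matches the default HTTP connection limit in System.Net (ServicePointManager.DefaultConnectionLimit is 2 per host, mirroring the classic HTTP/1.1 guidance), which applies to basicHttpBinding but not to net.tcp. A hedged client-side sketch:

        // Raise the per-host HTTP connection limit before creating any proxies;
        // without this, at most 2 concurrent HTTP requests reach a remote server.
        System.Net.ServicePointManager.DefaultConnectionLimit = 10;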

    Read the article

  • Extracting the source code of a Facebook page with JavaScript

    - by Hafizi Vilie
    If I write code in the JavaScript console of Chrome, I can retrieve the whole HTML source code by entering: var a = document.body.innerHTML; alert(a); For fb_dtsg on Facebook, I can easily extract it by writing: var fb_dtsg = document.getElementsByName('fb_dtsg')[0].value; Now, I am trying to extract the code "h=AfJSxEzzdTSrz-pS" from the Facebook page. The h value is especially useful for Facebook reporting. How can I get the h value for reporting? I don't know what the h value is; the h value is totally different when you communicate with different users. Without the correct h value, you cannot report. Actually, the h value is AfXXXXXXXXXXX (11 characters after 'Af'); that is all I know. Do you have any ideas for getting the value, or any function that generates it on the Facebook page? The Facebook source snippet is below; if you view source on a Facebook profile and search for h=Af, you will find the value: <code class="hidden_elem" id="ukftg4w44"> <!-- <div class="mtm mlm"> ... .... <span class="itemLabel fsm">Unfriend...</span></a></li> <li class="uiMenuItem" data-label="Report/Block..."> <a class="itemAnchor" role="menuitem" tabindex="-1" href="/ajax/report/social.php?content_type=0&amp;cid=1352686914&amp;rid=1352686914&amp;ref=http%3A%2F%2Fwww.facebook.com%2 F%3Fq&amp;h=AfjSxEzzdTSrz-pS&amp;from_gear=timeline" rel="dialog"> <span class="itemLabel fsm">Report/Block...</span></a></li></ul></div> ... .... </div> --> </code> Please guide me. How can I extract the value exactly? I tried the following code, but the comment block prevents me from extracting it. How can I extract a value that is inside a comment block? var a = document.getElementsByClassName('hidden_elem')[3].innerHTML;alert(a);

    Read the article

  • Implementing EAV pattern with Hibernate for User -> Settings relationship

    - by Trevor
    I'm trying to set up a simple EAV pattern in my web app using Java/Spring MVC and Hibernate. I can't seem to figure out the magic behind the Hibernate XML setup for this scenario. My database table "SETUP" has three columns: user_id (FK), setup_item, and setup_value. The database composite key is made up of user_id | setup_item. Here's the Setup.java class: public class Setup implements CommonFormElements, Serializable { private Map data = new HashMap(); private String saveAction; private Integer speciesNamingList; private User user; Logger log = LoggerFactory.getLogger(Setup.class); public String getSaveAction() { return saveAction; } public void setSaveAction(String action) { this.saveAction = action; } public User getUser() { return user; } public void setUser(User user) { this.user = user; } public Integer getSpeciesNamingList() { return speciesNamingList; } public void setSpeciesNamingList(Integer speciesNamingList) { this.speciesNamingList = speciesNamingList; } public Map getData() { return data; } public void setData(Map data) { this.data = data; } } My problem with the Hibernate setup is that I can't seem to figure out how to express the fact that a foreign key and the key of a map together construct the composite key of the table... this is due to a lack of experience using Hibernate. Here's my initial attempt at getting this to work: <composite-id> <key-many-to-one foreign-key="id" name="user" column="user_id" class="Business.User"> <meta attribute="use-in-equals">true</meta> </key-many-to-one> </composite-id> <map lazy="false" name="data" table="setup"> <key column="user_id" property-ref="user"/> <composite-map-key class="Command.Setup"> <key-property name="data" column="setup_item" type="string"/> </composite-map-key> <element column="setup_value" not-null="true" type="string"/> </map> Any insight into how to properly map this common scenario would be most appreciated!

    Read the article

  • How to implement a login page using Spring Security so that it works with Spring Web Flow?

    - by simon
    I have a web application using Spring 2.5.6 and Spring Security 2.0.4. I have implemented a working login page, which authenticates the user against a web service. The authentication is done by defining a custom authentication manager, like this: <beans:bean id="customizedFormLoginFilter" class="org.springframework.security.ui.webapp.AuthenticationProcessingFilter"> <custom-filter position="AUTHENTICATION_PROCESSING_FILTER" /> <beans:property name="defaultTargetUrl" value="/index.do" /> <beans:property name="authenticationFailureUrl" value="/login.do?error=true" /> <beans:property name="authenticationManager" ref="customAuthenticationManager" /> <beans:property name="allowSessionCreation" value="true" /> </beans:bean> <beans:bean id="customAuthenticationManager" class="com.sevenp.mobile.samplemgmt.web.security.CustomAuthenticationManager"> <beans:property name="authenticateUrlWs" value="${WS_ENDPOINT_ADDRESS}" /> </beans:bean> The authentication manager class: public class CustomAuthenticationManager implements AuthenticationManager, ApplicationContextAware { @Transactional @Override public Authentication authenticate(Authentication authentication) throws AuthenticationException { //authentication logic return new UsernamePasswordAuthenticationToken(principal, authentication.getCredentials(), grantedAuthorityArray); } The essential part of the login jsp looks like this: <c:url value="/j_spring_security_check" var="formUrlSecurityCheck"/> <form method="post" action="${formUrlSecurityCheck}"> <div id="errorArea" class="errorBox"> <c:if test="${not empty param.error}"> ${sessionScope["SPRING_SECURITY_LAST_EXCEPTION"].message} </c:if> </div> <label for="loginName"> Username: <input style="width:125px;" tabindex="1" id="login" name="j_username" /> </label> <label for="password"> Password: <input style="width:125px;" tabindex="2" id="password" name="j_password" type="password" /> </label> <input type="submit" tabindex="3" name="login" class="formButton" value="Login" /> </form> Now the problem is that the application should use Spring Web Flow. After the application was configured to use Spring Web Flow, the login does not work anymore - the form action to "/j_spring_security_check" results in a blank page without an error message. What is the best way to adapt the existing login process so that it works with Spring Web Flow?

    Read the article

  • How to update a strongly typed Html.DropDownList using jQuery

    - by Remnant
    I have a webpage with two radiobuttons and a dropdownlist as follows: <div class="sectionheader">Course <div class="dropdown"><%=Html.DropDownList("CourseSelection", Model.CourseList, new { @class = "dropdown" })%> </div> <div class="radiobuttons"><label><%=Html.RadioButton("CourseType", "Advanced", false )%> Advanced </label></div> <div class="radiobuttons"><label><%=Html.RadioButton("CourseType", "Beginner", true )%> Beginner </label></div> </div> The dropdownlist is strongly typed and populated with Model.CourseList (NB - on the first page load, 'Beginner' is the default selection and the dropdown shows the beginner course options accordingly). What I want to be able to do is to update the DropDownList based on which radiobutton is selected, i.e. if 'Advanced' is selected then show one list of course options in the dropdown, and if 'Beginner' is selected then show another list of courses. The code I would like to call sits within my Controller: public JsonResult UpdateDropDown(string courseType) { IDropDownList dropdownlistRepository = new DropDownListRepository(); IEnumerable<SelectListItem> courseList = dropdownlistRepository.GetCourseList(courseType); return Json(courseList); } Edit - Updated below to show latest position Using examples provided in jQuery in Action, I now have the following jQuery code: $('.radiobuttons input:radio').click(function() { var courseType = $(this).val(); //Get selected courseType from radiobutton var dropdownList = $(".dropdown"); //Ref for dropdownlist $.getJSON("/ByCourse/UpdateDropDown", { courseType: courseType }, function(data) { $(dropdownList).loadSelect(data); }); }); The loadSelect function is taken straight from the book and is as follows: (function($) { $.fn.emptySelect = function() { return this.each(function() { if (this.tagName == 'SELECT') this.options.length = 0; }); } $.fn.loadSelect = function(optionsDataArray) { return this.emptySelect().each(function() { if (this.tagName == 'SELECT') { var selectElement = this; $.each(optionsDataArray, function(index, optionData) { var option = new Option(optionData.Text, optionData.Value); if ($.browser.msie) { selectElement.add(option); } else { selectElement.add(option, null); } }); } }); } })(jQuery); 1 day+ later I still cannot get this to work. Assuming the jQuery code is correct, I can only think that the issue is with retrieving the actual data with $.getJSON. I have verified that JsonResult UpdateDropDown does actually retrieve valid data. What am I missing? An assembly reference? (NB: I have MicrosoftAjax.js and MicrosoftMvcAjax.js in the head tags of the master page.) Should JsonResult be ActionResult? (I have seen both used in samples on the web.) Do I need to register the route Controller/UpdateDropDown in Global.asax? Any further guidance would be appreciated.
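    One hedged guess worth ruling out first: if this project is ASP.NET MVC 2 or later, JsonResult rejects HTTP GET requests by default (to guard against JSON hijacking), so $.getJSON would fail even though the action returns valid data when exercised directly. The opt-in looks like this:

        public JsonResult UpdateDropDown(string courseType)
        {
            IDropDownList dropdownlistRepository = new DropDownListRepository();
            IEnumerable<SelectListItem> courseList =
                dropdownlistRepository.GetCourseList(courseType);
            // Explicitly allow this JSON response to be served over GET
            return Json(courseList, JsonRequestBehavior.AllowGet);
        }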

    Read the article

  • EntityManagerFactory error on WebSphere

    - by neverland
    I have a very weird problem. I have an application using Hibernate and Spring, with an entity manager defined using a JNDI lookup. It looks something like this: <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="persistenceUnitName" value="ConfigAPPPersist" /> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <property name="generateDdl" value="false" /> <property name="databasePlatform" value="org.hibernate.dialect.Oracle9Dialect" /> </bean> </property> </bean> <bean id="dataSource" class="org.springframework.jdbc.datasource.WebSphereDataSourceAdapter"> <property name="targetDataSource"> <bean class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName" value="jdbc/pmp" /> </bean> </property> </bean> This application runs fine in DEV. But when we move to higher environments, the team that deploys this application does it successfully at first, but after a few restarts of the application the entity manager starts giving this problem: Caused by: javax.persistence.PersistenceException: [PersistenceUnit: ConfigAPPPersist] Unable to build EntityManagerFactory at org.hibernate.ejb.Ejb3Configuration.buildEntityManagerFactory(Ejb3Configuration.java:677) at org.hibernate.ejb.HibernatePersistence.createContainerEntityManagerFactory(HibernatePersistence.java:132) at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:224) at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:291) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1368) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1334) ... 32 more Caused by: org.hibernate.MappingException: **property mapping has wrong number of columns**: com.***.***.jpa.marketing.entity.MarketBrands.$performasure_j2eeInfo type: object Now you would say this is pretty obvious: the entity MarketBrands is incorrect. But it's not; it maps to the table just fine. And the same code works on DEV. Also the JNDI cannot be incorrect, since it deploys and works fine initially but throws up this error after a restart. This is weird and not very logical. But if someone has faced this or has any idea what might be causing it, please help! The persistence.xml for the persistence unit has very little: <?xml version="1.0" encoding="UTF-8"?> <persistence xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd" version="1.0"> <persistence-unit name="ConfigAPPPersist"> <!-- commented code --> </persistence-unit> </persistence>

    Read the article

  • ExecuteNonQuery on a stored proc causes it to be deleted

    - by FinancialRadDeveloper
    This is a strange one. I have a Dev SQL Server which has the stored proc on it, and the same stored proc when used with the same code on the UAT DB causes it to delete itself! Has anyone heard of this behaviour? SQL Code: -- Check if user is registered with the system IF OBJECT_ID('dbo.sp_is_valid_user') IS NOT NULL BEGIN DROP PROCEDURE dbo.sp_is_valid_user IF OBJECT_ID('dbo.sp_is_valid_user') IS NOT NULL PRINT '<<< FAILED DROPPING PROCEDURE dbo.sp_is_valid_user >>>' ELSE PRINT '<<< DROPPED PROCEDURE dbo.sp_is_valid_user >>>' END go create procedure dbo.sp_is_valid_user @username as varchar(20), @isvalid as int OUTPUT AS BEGIN declare @tmpuser as varchar(20) select @tmpuser = username from CPUserData where username = @username if @tmpuser = @username BEGIN select @isvalid = 1 END else BEGIN select @isvalid = 0 END END GO Usage example DECLARE @isvalid int exec dbo.sp_is_valid_user 'username', @isvalid OUTPUT SELECT valid = @isvalid The usage example works all day... when I access it via C# it deletes itself in the UAT SQL DB but not the Dev one!! C# Code: public bool IsValidUser(string sUsername, ref string sErrMsg) { string sDBConn = ConfigurationSettings.AppSettings["StoredProcDBConnection"]; SqlCommand sqlcmd = new SqlCommand(); SqlDataAdapter sqlAdapter = new SqlDataAdapter(); try { SqlConnection conn = new SqlConnection(sDBConn); sqlcmd.Connection = conn; conn.Open(); sqlcmd.CommandType = CommandType.StoredProcedure; sqlcmd.CommandText = "sp_is_valid_user"; // params to pass in sqlcmd.Parameters.AddWithValue("@username", sUsername); // param for checking success passed back out sqlcmd.Parameters.Add("@isvalid", SqlDbType.Int); sqlcmd.Parameters["@isvalid"].Direction = ParameterDirection.Output; sqlcmd.ExecuteNonQuery(); int nIsValid = (int)sqlcmd.Parameters["@isvalid"].Value; if (nIsValid == 1) { conn.Close(); sErrMsg = "User Valid"; return true; } else { conn.Close(); sErrMsg = "Username : " + sUsername + " not found."; return false; } } catch (Exception e) { sErrMsg = "Error :" + e.Source + " msg: " + e.Message; return false; } }
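    Unrelated to the disappearing procedure itself, the calling code never closes the connection when an exception is thrown. A hedged cleanup of the same method using using-blocks, otherwise behaviour-preserving:

        public bool IsValidUser(string sUsername, ref string sErrMsg)
        {
            string sDBConn = ConfigurationSettings.AppSettings["StoredProcDBConnection"];
            try
            {
                using (var conn = new SqlConnection(sDBConn))
                using (var sqlcmd = new SqlCommand("sp_is_valid_user", conn))
                {
                    sqlcmd.CommandType = CommandType.StoredProcedure;
                    sqlcmd.Parameters.AddWithValue("@username", sUsername);
                    sqlcmd.Parameters.Add("@isvalid", SqlDbType.Int).Direction =
                        ParameterDirection.Output;
                    conn.Open();
                    sqlcmd.ExecuteNonQuery();
                    bool isValid = (int)sqlcmd.Parameters["@isvalid"].Value == 1;
                    sErrMsg = isValid ? "User Valid"
                                      : "Username : " + sUsername + " not found.";
                    return isValid; // Dispose closes the connection on all paths
                }
            }
            catch (Exception e)
            {
                sErrMsg = "Error :" + e.Source + " msg: " + e.Message;
                return false;
            }
        }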

    Read the article

  • How to create a JPA EntityManager in a Spring context?

    - by HDave
    I have a JPA/Spring application that uses Hibernate as the JPA provider. In one portion of the code, I have to manually create a DAO in my application with the new operator rather than use Spring DI. When I do this, the @PersistenceContext annotation is not processed by Spring. In my code where I create the DAO I have an EntityManagerFactory which I use to set the entityManager as follows: @PersistenceUnit private EntityManagerFactory entityManagerFactory; MyDAO dao = new MyDAOImpl(); dao.setEntityManager(entityManagerFactory.createEntityManager()); The problem is that when I do this, I get a Hibernate error: Could not find UserTransaction in JNDI [java:comp/UserTransaction] It's as if the HibernateEntityManager never received all the settings I've configured in Spring: <bean id="myAppTestLocalEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"> <property name="persistenceUnitName" value="myapp-core" /> <property name="persistenceUnitPostProcessors"> <bean class="com.myapp.core.persist.util.JtaPersistenceUnitPostProcessor"> <property name="jtaDataSource" ref="myappPersistTestJdbcDataSource" /> </bean> </property> <property name="jpaProperties"> <props> <prop key="hibernate.transaction.factory_class">org.hibernate.transaction.JTATransactionFactory</prop> <prop key="hibernate.transaction.manager_lookup_class">com.atomikos.icatch.jta.hibernate3.TransactionManagerLookup</prop> </props> </property> <property name="jpaVendorAdapter"> <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"> <property name="showSql" value="true" /> <!-- The following use the PropertyPlaceholderConfigurer but it doesn't work in Eclipse --> <property name="database" value="$DS{hibernate.database}" /> <property name="databasePlatform" value="$DS{hibernate.dialect}" /> I am not sure, but I think the issue might be that I am not creating the entity manager correctly with the plain vanilla createEntityManager() call rather than passing in a map of properties. I say this because when Spring's LocalContainerEntityManagerFactoryBean proxies the call to Hibernate's createEntityManager(), all of the Spring-configured options are missing. That is, there is no Map argument to the createEntityManager() call. Perhaps it is another problem entirely, however... not sure!

    Read the article

  • How to declare and implement a COM interface in C# that inherits from another COM interface?

    - by Carlos Loth
    I'm trying to understand the correct way to implement COM interfaces from C# code. It is straightforward when the interface doesn't inherit from another base interface. Like this one: [ComImport, Guid("2047E320-F2A9-11CE-AE65-08002B2E1262"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] public interface IShellFolderViewCB { long MessageSFVCB(uint uMsg, int wParam, int lParam); } However things start to become weird when I need to implement an interface that inherits from other COM interfaces. For example, if I implement the IPersistFolder2 interface, which inherits from IPersistFolder, which inherits from IPersist, as I usually do in C# code: [ComImport, Guid("0000010c-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] public interface IPersist { void GetClassID([Out] out Guid classID); } [ComImport, Guid("000214EA-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] public interface IPersistFolder : IPersist { void Initialize([In] IntPtr pidl); } [ComImport, Guid("1AC3D9F0-175C-11d1-95BE-00609797EA4F"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] public interface IPersistFolder2 : IPersistFolder { void GetCurFolder([Out] out IntPtr ppidl); } The operating system is not able to call the methods on my object implementation. When I'm debugging I can see the constructor of my IPersistFolder2 implementation is called many times; however, the interface methods I've implemented aren't called. I'm implementing IPersistFolder2 as follows: [Guid("A4603CDB-EC86-4E40-80FE-25D5F5FA467D")] public class PersistFolder: IPersistFolder2 { void IPersistFolder2.GetClassID(out Guid classID) { ... } void IPersistFolder2.Initialize(IntPtr pidl) { ... } void IPersistFolder2.GetCurFolder(out IntPtr ppidl) { ... } } What seems strange is that when I declare the COM interface imports as follows, it works: [ComImport, Guid("0000010c-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] internal interface IPersist { void GetClassID([Out] out Guid classID); } [ComImport, Guid("000214EA-0000-0000-C000-000000000046"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] internal interface IPersistFolder : IPersist { new void GetClassID([Out] out Guid classID); void Initialize([In] IntPtr pidl); } [ComImport, Guid("1AC3D9F0-175C-11d1-95BE-00609797EA4F"), InterfaceType(ComInterfaceType.InterfaceIsIUnknown)] internal interface IPersistFolder2 : IPersistFolder { new void GetClassID([Out] out Guid classID); new void Initialize([In] IntPtr pidl); void GetCurFolder([Out] out IntPtr ppidl); } I don't know why it works when I declare the COM interfaces that way (hiding the base interface methods using new). Maybe it is related to the way IUnknown works. Does anyone know the correct way of implementing COM interfaces in C# that inherit from other COM interfaces, and why?

    Read the article

  • Spring AOP pointcut that matches annotation on interface

    - by seanizer
    Hello, this is my first post here, so I apologize in advance for any stupidity on my side. I have a service class implemented in Java 6 / Spring 3 that needs an annotation to restrict access by role. I have defined an annotation called RequiredPermission that has as its value attribute one or more values from an enum called OperationType: public @interface RequiredPermission { /** * One or more {@link OperationType}s that map to the permissions required * to execute this method. * * @return */ OperationType[] value();} public enum OperationType { TYPE1, TYPE2; } package com.mycompany.myservice; public interface MyService{ @RequiredPermission(OperationType.TYPE1) void myMethod( MyParameterObject obj ); } package com.mycompany.myserviceimpl; public class MyServiceImpl implements MyService{ public void myMethod( MyParameterObject obj ){ // do stuff here } } I also have the following aspect definition: /** * Security advice around methods that are annotated with * {@link RequiredPermission}. * * @param pjp * @param param * @param requiredPermission * @return * @throws Throwable */ @Around(value = "execution(public *" + " com.mycompany.myserviceimpl.*(..))" + " && args(param)" + // parameter object " && @annotation( requiredPermission )" // permission annotation , argNames = "param,requiredPermission") public Object processRequest(final ProceedingJoinPoint pjp, final MyParameterObject param, final RequiredPermission requiredPermission) throws Throwable { if(userService.userHasRoles(param.getUsername(),requiredPermission.value())){ return pjp.proceed(); }else{ throw new SorryButYouAreNotAllowedToDoThatException( param.getUsername(),requiredPermission.value()); } } The parameter object contains a user name and I want to look up the required role for the user before allowing access to the method. When I put the annotation on the method in MyServiceImpl, everything works just fine: the pointcut is matched and the aspect kicks in. However, I believe the annotation is part of the service contract and should be published with the interface in a separate API package. And obviously, I would not like to put the annotation on both the service definition and the implementation (DRY). I know there are cases in Spring AOP where aspects are triggered by annotations on interface methods (e.g. Transactional). Is there a special syntax here, or is it just plain impossible out of the box? PS: I have not posted my spring config, as it seems to be working just fine. And no, those are neither my original class nor method names. Thanks in advance, Sean PPS: Actually, here is the relevant part of my spring config: <aop:aspectj-autoproxy proxy-target-class="false" /> <bean class="com.mycompany.aspect.MyAspect"> <property name="userService" ref="userService" /> </bean>

    Read the article

  • Insertions into Zipper trees on XML files in Clojure

    - by ivar
    I'm confused as to how to idiomatically change an XML tree accessed through clojure.contrib's zip-filter.xml. Should I be trying to do this at all, or is there a better way? Say that I have some dummy XML file "itemdb.xml" like this: <itemlist> <item id="1"> <name>John</name> <desc>Works near here.</desc> </item> <item id="2"> <name>Sally</name> <desc>Owner of pet store.</desc> </item> </itemlist> And I have some code: (require '[clojure.zip :as zip] '[clojure.contrib.duck-streams :as ds] '[clojure.contrib.lazy-xml :as lxml] '[clojure.contrib.zip-filter.xml :as zf]) (def db (ref (zip/xml-zip (lxml/parse-trim (java.io.File. "itemdb.xml"))))) ;; Test that we can traverse and parse. (doall (map #(print (format "%10s: %s\n" (apply str (zf/xml-> % :name zf/text)) (apply str (zf/xml-> % :desc zf/text)))) (zf/xml-> @db :item))) ;; I assume something like this is needed to make the xml tags (defn create-item [name desc] {:tag :item :attrs {:id "3"} :contents (list {:tag :name :attrs {} :contents (list name)} {:tag :desc :attrs {} :contents (list desc)})}) (def fred-item (create-item "Fred" "Green-haired astrophysicist.")) ;; This disturbs the structure somehow (defn append-item [xmldb item] (zip/insert-right (-> xmldb zip/down zip/rightmost) item)) ;; I want to do something more like this (defn append-item2 [xmldb item] (zip/insert-right (zip/rightmost (zf/xml-> xmldb :item)) item)) (dosync (alter db append-item2 fred-item)) ;; Save this simple xml file with some added stuff. (ds/spit "appended-itemdb.xml" (with-out-str (lxml/emit (zip/root @db) :pad true))) I am unclear about how to use the clojure.zip functions appropriately in this case, and how that interacts with zip-filter. If you spot anything particularly weird in this small example, please point it out.

    Read the article

  • How do I refactor these two C# functions to abstract their logic from the specific class properties

    - by ObligatoryMoniker
    I have two functions whose underlying logic is the same, but in one case it sets one property value on a class and in another case it sets a different one. How can I rewrite the following two functions to abstract away as much of the algorithm as possible so that I can make changes in logic in a single place? SetBillingAddress private void SetBillingAddress(OrderAddress newBillingAddress) { BasketHelper basketHelper = new BasketHelper(SiteConstants.BasketName); OrderAddress oldBillingAddress = basketHelper.Basket.Addresses[basketHelper.BillingAddressID]; bool NewBillingAddressIsNotOldBillingAddress = ((oldBillingAddress == null) || (newBillingAddress.OrderAddressId != oldBillingAddress.OrderAddressId)); bool BillingAddressHasBeenPreviouslySet = (oldBillingAddress != null); bool BillingAddressIsNotSameAsShippingAddress = (basketHelper.ShippingAddressID != basketHelper.BillingAddressID); bool NewBillingAddressIsNotShippingAddress = (newBillingAddress.OrderAddressId != basketHelper.ShippingAddressID); if (NewBillingAddressIsNotOldBillingAddress && BillingAddressHasBeenPreviouslySet && BillingAddressIsNotSameAsShippingAddress) { basketHelper.Basket.Addresses.Remove(oldBillingAddress); } if (NewBillingAddressIsNotOldBillingAddress && NewBillingAddressIsNotShippingAddress) { basketHelper.Basket.Addresses.Add(newBillingAddress); } basketHelper.BillingAddressID = newBillingAddress.OrderAddressId; basketHelper.Basket.Save(); } And here is the second one: SetShippingAddress private void SetShippingAddress(OrderAddress newShippingAddress) { BasketHelper basketHelper = new BasketHelper(SiteConstants.BasketName); OrderAddress oldShippingAddress = basketHelper.Basket.Addresses[basketHelper.ShippingAddressID]; bool NewShippingAddressIsNotOldShippingAddress = ((oldShippingAddress == null) || (newShippingAddress.OrderAddressId != oldShippingAddress.OrderAddressId)); bool ShippingAddressHasBeenPreviouslySet = (oldShippingAddress != null); bool ShippingAddressIsNotSameAsBillingAddress = (basketHelper.ShippingAddressID != basketHelper.BillingAddressID); bool NewShippingAddressIsNotBillingAddress = (newShippingAddress.OrderAddressId != basketHelper.BillingAddressID); if (NewShippingAddressIsNotOldShippingAddress && ShippingAddressHasBeenPreviouslySet && ShippingAddressIsNotSameAsBillingAddress) { basketHelper.Basket.Addresses.Remove(oldShippingAddress); } if (NewShippingAddressIsNotOldShippingAddress && NewShippingAddressIsNotBillingAddress) { basketHelper.Basket.Addresses.Add(newShippingAddress); } basketHelper.ShippingAddressID = newShippingAddress.OrderAddressId; basketHelper.Basket.Save(); } My initial thought was that if I could pass a class's property by reference, then I could rewrite the previous functions into something like private void SetPurchaseOrderAddress(OrderAddress newAddress, ref String CurrentChangingAddressIDProperty) and then call this function and pass in either basketHelper.BillingAddressID or basketHelper.ShippingAddressID as CurrentChangingAddressIDProperty. But since I can't pass C# properties by reference, I am not sure what to do with this code to be able to reuse the logic in both places. Thanks for any insight you can give me.
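    Since C# properties cannot be passed by ref, one hedged alternative is to pass delegate accessors for the two ID properties. The sketch below assumes the address IDs are strings (as the ref String idea in the question implies):

        private void SetAddress(OrderAddress newAddress,
            Func<BasketHelper, string> getThisId,   // reads the ID being replaced
            Action<BasketHelper, string> setThisId, // writes the ID being replaced
            Func<BasketHelper, string> getOtherId)  // reads the opposite ID
        {
            BasketHelper basketHelper = new BasketHelper(SiteConstants.BasketName);
            OrderAddress oldAddress = basketHelper.Basket.Addresses[getThisId(basketHelper)];

            bool isNewAddress = (oldAddress == null)
                || (newAddress.OrderAddressId != oldAddress.OrderAddressId);
            bool hadAddress = (oldAddress != null);
            bool thisDiffersFromOther =
                (getThisId(basketHelper) != getOtherId(basketHelper));
            bool newIsNotOther =
                (newAddress.OrderAddressId != getOtherId(basketHelper));

            if (isNewAddress && hadAddress && thisDiffersFromOther)
                basketHelper.Basket.Addresses.Remove(oldAddress);
            if (isNewAddress && newIsNotOther)
                basketHelper.Basket.Addresses.Add(newAddress);

            setThisId(basketHelper, newAddress.OrderAddressId);
            basketHelper.Basket.Save();
        }

        // Each public setter then collapses to a one-liner:
        private void SetBillingAddress(OrderAddress newBillingAddress)
        {
            SetAddress(newBillingAddress,
                h => h.BillingAddressID,
                (h, id) => h.BillingAddressID = id,
                h => h.ShippingAddressID);
        }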

    Read the article

  • Interop.Outlook.UserProperties.Add causing a problem at connection time

    - by Akie
    Hi All, I have created a plug-in for Outlook. The plug-in has only the code below. private void OnNewOutlookInspector(Outlook.Inspector OutlookInsptr) { Outlook.MailItem MlItem = (Outlook.MailItem)OutlookInsptr.CurrentItem; // If I remove the line below, everything works fine. MlItem.UserProperties.Add("INSPINIT", Outlook.OlUserPropertyType.olText , true , true ).Value = "1"; } public void OnConnection(object application, Extensibility.ext_ConnectMode connectMode, object addInInst, ref System.Array custom) { applicationObject = application; addInInstance = addInInst; MessageBox.Show("in connection new 2"); OutlkApp = (Outlook.Application)application; OutlkInsptrs = OutlkApp.Inspectors; OutlkInsptrs.NewInspector += new Outlook.InspectorsEvents_NewInspectorEventHandler(OnNewOutlookInspector); } The problem I am facing is: when I send an HTML mail while the plug-in is enabled, it is received as plain text at the receiving end. Below is the mail content, along with the header and body, at the receiving end. x-sender: [email protected] x-receiver: [email protected] Received: from blr-s-07.pointcrossblr.com ([192.168.1.107]) by blr-ws-134.pointcrossblr.com with Microsoft SMTPSVC(6.0.2600.5949); Wed, 22 Dec 2010 17:11:02 +0530 Received: from blrws134 ([192.168.1.175]) by blr-s-07.pointcrossblr.com with Microsoft SMTPSVC(6.0.3790.4675); Wed, 22 Dec 2010 17:11:02 +0530 From: "Ashif Nataliya" <[email protected]> To: <[email protected]> Cc: <[email protected]> Subject: RTF FRM blr to pc.com cc blr-ws-134 Date: Wed, 22 Dec 2010 17:11:02 +0530 Message-ID: <[email protected]> MIME-Version: 1.0 Content-Type: multipart/mixed; boundary="----=_NextPart_000_00F7_01CBA1FB.36115580" X-Mailer: Microsoft Outlook 14.0 Content-Language: en-us X-MS-TNEF-Correlator: 00000000DCB2344DE8F50F4FBC91085BB5C06D55A4172000 thread-index: AcuhzRuTOBkvHPUnS1aLi9+cHNAWhA== Return-Path: [email protected] X-OriginalArrivalTime: 22 Dec 2010 11:41:02.0822 (UTC) FILETIME=[1C788860:01CBA1CD] This is a multipart message in MIME format. ------=_NextPart_000_00F7_01CBA1FB.36115580 Content-Type: text/plain; charset="us-ascii" Content-Transfer-Encoding: 7bit HTML Test Test Mail ------=_NextPart_000_00F7_01CBA1FB.36115580 Content-Type: application/ms-tnef; name="winmail.dat" Content-Transfer-Encoding: base64 Content-Disposition: attachment; filename="winmail.dat" // and some other code..... Any help is appreciated. Thanks.

    Read the article

  • Long Running Stored Proc - Report Progress Using BackgroundWorker & Timer

    - by daveywc
    While a long running stored proc (RMR_Seek) is executing (called via a Linq-To-SQL data context), I am trying to call another stored proc (RMR_GetLatestModelMessage) to check a table for the latest status message. The long running stored proc updates the table in question with status messages as it executes. I want to display the status message on a message panel to advise the user of the status of the execution of RMR_Seek. For various reasons it is not possible to determine how long RMR_Seek will take to execute, so a progress bar with percentage increments is not feasible. I thought I'd found the way to do it by calling the long running stored proc from within a BackgroundWorker DoWork event handler. This worked fine and allowed me to update my message panel with some dummy status messages that were NOT obtained via RMR_GetLatestModelMessage while RMR_Seek was running. However, now that I have tried to implement this fully by calling RMR_GetLatestModelMessage to obtain the status messages, I am running into problems that seem to be related to the mix of the BackgroundWorker and my System.Windows.Forms.Timer. An extract of the code I am using is below. I have tried many different ways around this but each one seems to present its own set of problems. The code below is problematic in the bw_DoWork event. The RMR_Seek stored proc gets called but does not execute properly - it also seems to be inconsistent as to whether _IsCompleted gets set to true. I'm sure there is a better way to achieve what I am trying to do. private bool _IsCompleted; private void RunRevenueSeek() { if (_SelectedModel == null) { MessageBox.Show("Please select a model from the list and try again.", "Model Generation", MessageBoxButtons.OK, MessageBoxIcon.Information); } else { var bw = new BackgroundWorker(); bw.DoWork += new DoWorkEventHandler(bw_DoWork); ProgressPanelControl.Visible = true; _IsCompleted = false; MessageTimer.Start(); // Has an interval of 3000 bw.RunWorkerAsync(); ProgressLabelControl.Text = "Refreshing Data"; this.Update(); ...more code goes here } } private void bw_DoWork(object sender, DoWorkEventArgs e) { using (var dc = new RevMdlrDataClassesDataContext()) { dc.CommandTimeout = 300; dc.RMR_Seek(_SelectedModel.ModelSet_ID); _IsCompleted = true; } } private void MessageTimer_Tick(object sender, EventArgs e) { string message = ""; if (_IsCompleted) { MessageTimer.Stop(); } else { using (var dc = new RevMdlrDataClassesDataContext()) { dc.CommandTimeout = 300; dc.RMR_GetLatestModelMessage(_SelectedModel.ModelSet_ID, ref message); ProgressLabelControl.Text = message; this.Update(); } } }
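    A hedged simplification of the completion signalling: instead of sharing the unsynchronized _IsCompleted flag between the worker and the UI timer, let RunWorkerCompleted (which the BackgroundWorker raises on the UI thread) stop the timer. A sketch reusing the names from the question:

        private void RunRevenueSeek()
        {
            var bw = new BackgroundWorker();
            bw.DoWork += (s, e) =>
            {
                using (var dc = new RevMdlrDataClassesDataContext())
                {
                    dc.CommandTimeout = 300;
                    dc.RMR_Seek(_SelectedModel.ModelSet_ID); // long-running proc
                }
            };
            bw.RunWorkerCompleted += (s, e) =>
            {
                // Raised on the UI thread, so no flag and no race condition
                MessageTimer.Stop();
                ProgressLabelControl.Text =
                    (e.Error == null) ? "Completed" : "Failed: " + e.Error.Message;
            };

            MessageTimer.Start(); // MessageTimer_Tick keeps polling for messages
            bw.RunWorkerAsync();
        }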

    Read the article

  • Unit finalization order for an application compiled with run-time packages?

    - by Alexander
    I need to execute my code after the finalization of the SysUtils unit. I've placed my code in a separate unit and included it first in the uses clause of the dpr-file, like this: project Project1; uses MyUnit, // <- my separate unit SysUtils, Classes, SomeOtherUnits; procedure Test; begin // end; begin SetProc(Test); end. MyUnit looks like this: unit MyUnit; interface procedure SetProc(AProc: TProcedure); implementation var Test: TProcedure; procedure SetProc(AProc: TProcedure); begin Test := AProc; end; initialization finalization Test; end. Note that MyUnit doesn't have any uses. This is a usual Windows exe: no console, no forms, and compiled with default run-time packages. MyUnit is not part of any package (but I've tried to use it from a package too). I expect the finalization section of MyUnit to be executed after the finalization section of SysUtils. This is what Delphi's help tells me. However, this is not always the case. I have 2 test apps, which differ a bit in the code of the Test routine/dpr-file and the units listed in uses. MyUnit, however, is listed first in all cases. One application runs as expected: Halt0 - FinalizeUnits - ...other units... - SysUtils's finalization - MyUnit's finalization - ...other units... But the second does not. MyUnit's finalization is invoked before SysUtils's finalization. The actual call chain looks like this: Halt0 - FinalizeUnits - ...other units... - SysUtils's finalization (skipped) - MyUnit's finalization - ...other units... - SysUtils's finalization (executed) Both projects have very similar settings. I tried a lot to remove/minimize their differences, but I still do not see a reason for this behaviour. I've tried to debug this and found out that: it seems that every unit has some kind of reference count. And it seems that InitTable contains multiple references to the same unit. When SysUtils' finalization section is called the first time, it changes the reference counter and does nothing. Then MyUnit's finalization is executed. And then SysUtils is called again, but this time the ref-count reaches zero and the finalization section is executed: Finalization: // SysUtils' finalization 5003B3F0 55 push ebp // here and below is some form of stub 5003B3F1 8BEC mov ebp,esp 5003B3F3 33C0 xor eax,eax 5003B3F5 55 push ebp 5003B3F6 688EB50350 push $5003b58e 5003B3FB 64FF30 push dword ptr fs:[eax] 5003B3FE 648920 mov fs:[eax],esp 5003B401 FF05DCAD1150 inc dword ptr [$5011addc] // here: some sort of reference counter 5003B407 0F8573010000 jnz $5003b580 // <- this jump skips execution of finalization for first call 5003B40D B8CC4D0350 mov eax,$50034dcc // here and below is actual SysUtils' finalization section ... Can anyone shed light on this issue? Am I missing something?

    Read the article

  • .NET JIT Code Cache leaking?

    - by pitchfork
    We have a server component written in .Net 3.5. It runs as a service on a Windows Server 2008 Standard Edition. It works great, but after some time (days) we notice massive slowdowns and an increased working set. We expected some kind of memory leak and used WinDBG/SOS to analyze dumps of the process. Unfortunately the GC Heap doesn’t show any leak, but we noticed that the JIT code heap has grown from 8MB after the start to more than 1GB after a few days. We don’t use any dynamic code generation techniques of our own. We use Linq2SQL, which is known for dynamic code generation, but we don’t know if it can cause such a problem. The main question is whether there is any technique to analyze the dump and check where all these Host Code Heap blocks that are shown in the WinDBG dumps come from. [Update] In the meantime we did some more analysis and had Linq2SQL as a probable suspect, especially since we do not use precompiled queries. The following example program creates exactly the same behaviour, where more and more Host Code Heap blocks are created over time. using System; using System.Linq; using System.Threading; namespace LinqStressTest { class Program { static void Main(string[] args) { for (int i = 0; i < 100; ++i) ThreadPool.QueueUserWorkItem(Worker); while(runs < 1000000) { Thread.Sleep(5000); } } static void Worker(object state) { for (int i = 0; i < 50; ++i) { using (var ctx = new DataClasses1DataContext()) { long id = rnd.Next(); var x = ctx.AccountNucleusInfos.Where(an => an.Account.SimPlayers.First().Id == id).SingleOrDefault(); } } var localruns = Interlocked.Add(ref runs, 1); System.Console.WriteLine("Action: " + localruns); ThreadPool.QueueUserWorkItem(Worker); } static Random rnd = new Random(); static long runs = 0; } } When we replace the Linq query with a precompiled one, the problem seems to disappear.
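    For reference, a hedged sketch of what the precompiled form of the test query might look like (the entity name AccountNucleusInfo is inferred from the plural property above). CompiledQuery translates the expression tree once and reuses the result, instead of generating and JIT-compiling a fresh delegate per call:

        // Compile once; reuse across threads and DataContext instances.
        static readonly Func<DataClasses1DataContext, long, AccountNucleusInfo>
            FindByPlayerId = System.Data.Linq.CompiledQuery.Compile(
                (DataClasses1DataContext ctx, long id) =>
                    ctx.AccountNucleusInfos
                       .Where(an => an.Account.SimPlayers.First().Id == id)
                       .SingleOrDefault());

        // In Worker: var x = FindByPlayerId(ctx, rnd.Next());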

    Read the article

  • How to dispose of a .NET COM interop object on Release()

    - by mhenry1384
    I have a COM object written in managed code (C++/CLI). I am using that object from standard C++. How do I force my COM object's destructor to be called immediately when the COM object is released? If that's not possible, can I have Release() call a MyDispose() method on my COM object?

    My code to declare the object (C++/CLI):

    [Guid("57ED5388-blahblah")]
    [InterfaceType(ComInterfaceType::InterfaceIsIDispatch)]
    [ComVisible(true)]
    public interface class IFoo
    {
        void Doit();
    };

    [Guid("417E5293-blahblah")]
    [ClassInterface(ClassInterfaceType::None)]
    [ComVisible(true)]
    public ref class Foo : IFoo
    {
    public:
        void MyDispose();
        ~Foo() { MyDispose(); }                  // This is never called
        !Foo() { MyDispose(); }                  // This is called by the garbage collector.
        virtual ULONG Release() { MyDispose(); } // This is never called
        virtual void Doit();
    };

    My code to use the object (native C++):

    #import "..\\Debug\\Foo.tlb"
    ...
    Bar::IFoo setup(__uuidof(Bar::Foo)); // This object comes from the .tlb.
    setup.Doit();
    setup->Release(); // explicit release; not really necessary since Bar::IFoo's destructor will call Release().

    If I put a destructor method on my COM object, it is never called. If I put a finalizer method, it is called when the garbage collector gets around to it. If I explicitly call my Release() override, it is never called. I would really like it so that when my native Bar::IFoo object goes out of scope, it automatically calls my .NET object's dispose code. I would think I could do that by overriding Release() and calling MyDispose() when the object count reaches zero, but apparently I'm not overriding Release() correctly, because my Release() method is never called.

    Obviously I can make this happen by putting my MyDispose() method in the interface and requiring the people using my object to call MyDispose() before Release(), but it would be slicker if Release() just cleaned up the object. Is it possible to force the .NET COM object's destructor, or some other method, to be called immediately when a COM object is released? Googling this issue gets me a lot of hits telling me to call System.Runtime.InteropServices.Marshal.ReleaseComObject(), but of course, that's how you tell .NET to release a COM object. I want COM Release() to dispose of a .NET object.
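
    For illustration, a rough sketch (written in C# rather than C++/CLI, and not from the original question) of the fallback the asker mentions: expose the cleanup method on the COM interface itself so the unmanaged caller can dispose deterministically before Release(). The GUID below is a placeholder:

    using System;
    using System.Runtime.InteropServices;

    [Guid("00000000-0000-0000-0000-000000000001")] // placeholder GUID
    [InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
    [ComVisible(true)]
    public interface IFoo
    {
        void Doit();
        void MyDispose(); // exposed so unmanaged callers can clean up explicitly
    }

    [ClassInterface(ClassInterfaceType.None)]
    [ComVisible(true)]
    public class Foo : IFoo, IDisposable
    {
        public void Doit() { /* real work */ }

        // Called explicitly by the native client just before Release().
        public void MyDispose() { Dispose(); }

        public void Dispose()
        {
            // release expensive resources deterministically here
            GC.SuppressFinalize(this);
        }

        ~Foo() { Dispose(); } // safety net if the caller forgets MyDispose()
    }

    On the native side, a small RAII wrapper around the interface pointer can pair MyDispose() with Release() in its destructor, so callers cannot forget either call.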

    Read the article

  • A continued saga of C# interoperability with unmanaged C++

    - by Gilad
    After a day of banging my head against the wall, both literally and metaphorically, I plead for help.

    I have an unmanaged C++ project which is compiled as a DLL; let's call it CPP Project. It currently works in an unmanaged environment. In addition, I have created a WPF project, which shall be called WPF Project. This is a simple and currently almost empty project containing a single window, and I want it to use code from CPP Project. For that, I have created a CLR C++ project, which shall be called Interop Project and is also compiled as a DLL. For simplicity I will attach some basic testing code, boiled down to the essentials.

    CPP Project has the following two testing files:

    tester.h

    #pragma once

    extern "C" class __declspec(dllexport) NativeTester
    {
    public:
        void NativeTest();
    };

    tester.cpp

    #include "tester.h"

    void NativeTester::NativeTest()
    {
        int i = 0;
    }

    Interop Project has the following file:

    InteropLib.h

    #pragma once

    #include <tester.h>

    using namespace System;

    namespace InteropLib
    {
        public ref class InteropProject
        {
        public:
            static void Test()
            {
                NativeTester nativeTester;
                nativeTester.NativeTest();
            }
        };
    }

    Lastly, WPF Project has a single window referencing Interop Project:

    MainWindow.xaml.cs

    using System;
    using System.Windows;
    using InteropLib;

    namespace AppGUI
    {
        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
                InteropProject.Test();
            }
        }
    }

    And the XAML itself is an empty window (default created). Once I try to run the WPF project, I get the following error:

    System.Windows.Markup.XamlParseException: 'The invocation of the constructor on type 'AppGUI.MainWindow' that matches the specified binding constraints threw an exception.' Line number '3' and line position '9'. ---> System.IO.FileNotFoundException: Could not load file or assembly 'InteropLib.dll' or one of its dependencies. The specified module could not be found. at AppGUI.MainWindow..ctor()

    Interestingly enough, if I do not export the class from CPP Project, I do not get this error. Say, if I change tester.h to:

    #pragma once

    class NativeTester
    {
    public:
        void NativeTest()
        {
            int i = 0;
        }
    };

    However, in this case I cannot use my more complex classes. If I move my implementation to a cpp file like before, I get unresolved linkage errors because nothing is exported. The C++ code I actually want to use is large, has many classes, and is object oriented, so I can't just move all my implementation into the .h files. Please help me understand this horrific error I've been trying to resolve without success. Thanks.
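
    A diagnostic sketch (an editorial suggestion, not part of the question): with mixed-mode assemblies, a FileNotFoundException for "InteropLib.dll or one of its dependencies" frequently means a native dependency — here the CPP Project DLL or its C++ runtime — cannot be located at load time. Pre-loading the native DLL by hand from the WPF project often surfaces the real missing module; the path argument is an assumption:

    using System;
    using System.Runtime.InteropServices;

    static class NativeProbe
    {
        [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
        static extern IntPtr LoadLibrary(string fileName);

        // Call this before the first use of InteropLib; point the path at the
        // unmanaged CPP Project DLL in the application's output folder.
        public static void Probe(string path)
        {
            if (LoadLibrary(path) == IntPtr.Zero)
            {
                Console.WriteLine("Failed to load '{0}', Win32 error {1}",
                                  path, Marshal.GetLastWin32Error());
            }
        }
    }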

    Read the article

  • C# interop with Ghostscript

    - by yodaj007
    I'm trying to access some Ghostscript functions like so:

    [DllImport(@"C:\Program Files\GPLGS\gsdll32.dll", EntryPoint = "gsapi_revision")]
    public static extern int Foo(gsapi_revision_t x, int len);

    public struct gsapi_revision_t
    {
        [MarshalAs(UnmanagedType.LPTStr)]
        string product;
        [MarshalAs(UnmanagedType.LPTStr)]
        string copyright;
        long revision;
        long revisiondate;
    }

    public static void Main()
    {
        gsapi_revision_t foo = new gsapi_revision_t();
        Foo(foo, Marshal.SizeOf(foo));
    }

    This corresponds to these definitions from Ghostscript's iapi.h header:

    typedef struct gsapi_revision_s {
        const char *product;
        const char *copyright;
        long revision;
        long revisiondate;
    } gsapi_revision_t;

    GSDLLEXPORT int GSDLLAPI gsapi_revision(gsapi_revision_t *pr, int len);

    But my code reads nothing into the string fields. If I add 'ref' to the function, it reads gibberish. However, the following code reads the data just fine:

    public struct gsapi_revision_t
    {
        IntPtr product;
        IntPtr copyright;
        long revision;
        long revisiondate;
    }

    public static void Main()
    {
        gsapi_revision_t foo = new gsapi_revision_t();
        IntPtr x = Marshal.AllocHGlobal(20);
        for (int i = 0; i < 20; i++)
            Marshal.WriteInt32(x, i, 0);
        int result = Foo(x, 20);
        IntPtr productNamePtr = Marshal.ReadIntPtr(x);
        IntPtr copyrightPtr = Marshal.ReadIntPtr(x, 4);
        long revision = Marshal.ReadInt64(x, 8);
        long revisionDate = Marshal.ReadInt64(x, 12);
        byte[] dest = new byte[1000];
        Marshal.Copy(productNamePtr, dest, 0, 1000);
        string name = Read(productNamePtr);
        string copyright = Read(copyrightPtr);
    }

    public static string Read(IntPtr p)
    {
        List<byte> bits = new List<byte>();
        int i = 0;
        while (true)
        {
            byte b = Marshal.ReadByte(new IntPtr(p.ToInt64() + i));
            if (b == 0)
                break;
            bits.Add(b);
            i++;
        }
        return Encoding.ASCII.GetString(bits.ToArray());
    }

    So what am I doing wrong with the marshaling?
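
    For comparison, a hedged sketch of a declaration that should marshal cleanly, assuming a 32-bit process against gsdll32.dll: a C 'long' is 32 bits on Windows, so it maps to C# int (the C# long in the question is 64 bits, which shifts every field offset after the pointers); the struct is passed by ref; and the const char* fields come back as IntPtr to be read with Marshal.PtrToStringAnsi:

    using System;
    using System.Runtime.InteropServices;

    [StructLayout(LayoutKind.Sequential)]
    public struct gsapi_revision_t
    {
        public IntPtr product;   // const char* — owned by Ghostscript, do not free
        public IntPtr copyright; // const char*
        public int revision;     // C 'long' is 32-bit on Windows, so int, not long
        public int revisiondate;
    }

    public static class GhostscriptApi
    {
        [DllImport(@"C:\Program Files\GPLGS\gsdll32.dll", EntryPoint = "gsapi_revision")]
        public static extern int gsapi_revision(ref gsapi_revision_t pr, int len);

        public static void Main()
        {
            var rev = new gsapi_revision_t();
            // gsapi_revision fills the struct in place when the size matches.
            if (gsapi_revision(ref rev, Marshal.SizeOf(typeof(gsapi_revision_t))) == 0)
            {
                Console.WriteLine(Marshal.PtrToStringAnsi(rev.product));
                Console.WriteLine(Marshal.PtrToStringAnsi(rev.copyright));
                Console.WriteLine(rev.revision);
            }
        }
    }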

    Read the article

< Previous Page | 198 199 200 201 202 203 204 205 206 207 208 209  | Next Page >