Search Results

Search found 49291 results on 1972 pages for 'method call'.


  • SQLBindParameter is working fine but SQLExecute gives an error on Windows 2008 Server 64-bit.

    - by rajugkgp
    Dear All, I am migrating my application from 32-bit (Windows 2003 Server) to 64-bit (Windows 2008 Server R2). While trying to execute a SQL command, I get the following: Encountered ODBC error -1: S1010, 0, [Microsoft][ODBC Driver Manager] Function sequence error. The failing internal call is SQLExecute(). This works perfectly on Windows 2003 Server 32-bit, and running the same command directly from the command prompt also works. I checked the sequence of calls: we make two consecutive SQLBindParameter calls and then call SQLExecute. Is this sequence incorrect on 64-bit? I also checked the return code from SQLExecute, which is 99. The same sequence works fine on 32-bit Windows. Any pointers or suggestions would be very helpful. Thanks in advance. -R

    Read the article

  • Where best to instantiate and close a Silverlight-enabled WCF Service from the Silverlight app?

    - by Yttrium
    When using a Silverlight-enabled WCF service, where is the best place to instantiate the service client and to call its CloseAsync() method? Should you instantiate a client each time you need to make a call to the service, or is it better to keep a single instance as a variable of the UserControl that will be making the calls? And where is it best to call CloseAsync? Should you call it in each of the "someServiceCall_completed" event methods? Or, if the client is created as a variable of the UserControl class, is there a single place to call it, like a Dispose method or something equivalent for the UserControl class? Thanks, Jeff
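    For reference, the two lifetimes being weighed in the question look roughly like this. This is a minimal sketch: the proxy class (Service1Client), the operation (GetData), and the UI element are hypothetical stand-ins, not code from the question.

        // Option 1: create a client per service call and close it in the Completed handler.
        private void LoadData()
        {
            var client = new Service1Client();           // hypothetical generated proxy
            client.GetDataCompleted += (s, e) =>
            {
                if (e.Error == null)
                    resultsList.ItemsSource = e.Result;  // hypothetical ListBox on the UserControl
                client.CloseAsync();                     // close the channel once this call completes
            };
            client.GetDataAsync();
        }

        // Option 2: keep a single client as a field of the UserControl and reuse it,
        // which leaves open the question of the one place to call CloseAsync when done.
        private readonly Service1Client sharedClient = new Service1Client();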

    Read the article

  • What happens when an OSGi service using JNI is unregistered while in use?

    - by schngrg
    As I understand it, OSGi services can be unregistered at any time, including while they are in use. Consider an OSGi service that internally makes a long-running JNI call, and while that JNI call is executing, the service is unregistered by OSGi. Will the JNI call be allowed to finish, or will it be terminated midway? What if it were a normal, non-JNI long-running Java call? Will that call be allowed to finish execution, or will OSGi terminate everything immediately and unregister? What is the expected behavior in such a case, and does it depend on whether the service was obtained using a 'tracker'? SG

    Read the article

  • Calling a Sub or Function contained in a module using "CallByName" in VB/VBA

    - by Kratz
    It is easy to call a function inside a class module using CallByName. How about functions inside a standard module?

        'inside class module
        'classModule name: clsExample
        Function classFunc1()
            MsgBox "I'm class module 1"
        End Function

        'inside standard module
        'Module name: module1
        Function Func1()
            MsgBox "I'm standard module 1"
        End Function

        ' The main sub
        Sub Main()
            ' to call a function inside a class module
            Dim clsObj As New clsExample
            Call CallByName(clsObj, "ClassFunc1")

            ' here's the question... how to call a function inside a standard module?
            ' how to declare the object "stdObj" in reference to module1?
            Call CallByName(stdObj, "Func1") ' is this correct?
        End Sub

    Read the article

  • War deployment error related to classloading

    - by user563564
    Hello, when I deploy my WAR file and run it, I get an error relating to org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader:
    Jan 6, 2011 3:16:04 PM org.apache.catalina.startup.HostConfig deployDescriptor INFO: Deploying configuration descriptor servlet.xml
    Jan 6, 2011 3:16:04 PM org.apache.catalina.core.StandardContext preDeregister SEVERE: error stopping LifecycleException: Pipeline has not been started at org.apache.catalina.core.StandardPipeline.stop(StandardPipeline.java:257) at org.apache.catalina.core.StandardContext.stop(StandardContext.java:4629) at org.apache.catalina.core.StandardContext.preDeregister(StandardContext.java:5370) at org.apache.tomcat.util.modeler.BaseModelMBean.preDeregister(BaseModelMBean.java:1130) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.preDeregisterInvoke(DefaultMBeanServerInterceptor.java:1048) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:421) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:403) at com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:506) at org.apache.tomcat.util.modeler.Registry.unregisterComponent(Registry.java:575) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4230) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:521) at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1359) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:297) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761) at org.apache.catalina.manager.ManagerServlet.check(ManagerServlet.java:1500) at org.apache.catalina.manager.ManagerServlet.deploy(ManagerServlet.java:849) at org.apache.catalina.manager.ManagerServlet.doGet(ManagerServlet.java:351) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:199) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:859) at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579) at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1555) at java.lang.Thread.run(Thread.java:619) Jan 6, 2011 3:16:05 PM org.apache.catalina.loader.WebappLoader start SEVERE: LifecycleException java.lang.ClassNotFoundException: org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader at java.net.URLClassLoader$1.run(URLClassLoader.java:200) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:188) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at java.lang.ClassLoader.loadClass(ClassLoader.java:251) at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:319) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:169) at org.apache.catalina.loader.WebappLoader.createClassLoader(WebappLoader.java:773) at org.apache.catalina.loader.WebappLoader.start(WebappLoader.java:638) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4341) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:521) at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1359) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:297) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761) at org.apache.catalina.manager.ManagerServlet.check(ManagerServlet.java:1500) at org.apache.catalina.manager.ManagerServlet.deploy(ManagerServlet.java:849) at org.apache.catalina.manager.ManagerServlet.doGet(ManagerServlet.java:351) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:199) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558) at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:859) at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579) at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1555) at java.lang.Thread.run(Thread.java:619) Jan 6, 2011 3:16:05 PM org.apache.catalina.core.ContainerBase addChildInternal SEVERE: ContainerBase.addChild: start: LifecycleException: start: : java.lang.ClassNotFoundException: org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader at org.apache.catalina.loader.WebappLoader.start(WebappLoader.java:679) at org.apache.catalina.core.StandardContext.start(StandardContext.java:4341) at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:791) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:521) at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1359) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:297) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761) at org.apache.catalina.manager.ManagerServlet.check(ManagerServlet.java:1500) at org.apache.catalina.manager.ManagerServlet.deploy(ManagerServlet.java:849) at org.apache.catalina.manager.ManagerServlet.doGet(ManagerServlet.java:351) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:199) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:859) 
at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579) at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1555) at java.lang.Thread.run(Thread.java:619) Jan 6, 2011 3:16:05 PM org.apache.catalina.startup.HostConfig deployDescriptor SEVERE: Error deploying configuration descriptor servlet.xml java.lang.IllegalStateException: ContainerBase.addChild: start: LifecycleException: start: : java.lang.ClassNotFoundException: org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:795) at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:771) at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:546) at org.apache.catalina.startup.HostConfig.deployDescriptor(HostConfig.java:637) at org.apache.catalina.startup.HostConfig.deployApps(HostConfig.java:521) at org.apache.catalina.startup.HostConfig.check(HostConfig.java:1359) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.apache.tomcat.util.modeler.BaseModelMBean.invoke(BaseModelMBean.java:297) at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761) at org.apache.catalina.manager.ManagerServlet.check(ManagerServlet.java:1500) at org.apache.catalina.manager.ManagerServlet.deploy(ManagerServlet.java:849) at org.apache.catalina.manager.ManagerServlet.doGet(ManagerServlet.java:351) at javax.servlet.http.HttpServlet.service(HttpServlet.java:617) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.netbeans.modules.web.monitor.server.MonitorFilter.doFilter(MonitorFilter.java:199) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:558) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298) at org.apache.coyote.http11.Http11AprProcessor.process(Http11AprProcessor.java:859) at org.apache.coyote.http11.Http11AprProtocol$Http11ConnectionHandler.process(Http11AprProtocol.java:579) at org.apache.tomcat.util.net.AprEndpoint$Worker.run(AprEndpoint.java:1555) at java.lang.Thread.run(Thread.java:619) context.xml file -- <?xml version="1.0" encoding="UTF-8"?> <Context antiJARLocking="true" path="servlet"> <Loader loaderClass="org.springframework.instrument.classloading.tomcat.TomcatInstrumentableClassLoader"/> </Context> FAIL - Failed to 
deploy application at context path /servlet. How can I resolve this?

    Read the article

  • Passing multiple POST parameters to Web API Controller Methods

    - by Rick Strahl
    ASP.NET Web API introduces a new API for creating REST APIs and making AJAX callbacks to the server. This new API provides a host of great new functionality that unifies the features of the various AJAX/REST APIs that Microsoft created before it - ASP.NET AJAX and WCF REST specifically - and combines them into a more consistent whole. Web API addresses many of the concerns that developers had with these older APIs, namely that it was very difficult to build consistent REST style resource APIs easily. While Web API provides many new features and makes many scenarios much easier, a lot of the focus has been on making it easier to build REST compliant APIs that are focused on resource based solutions and HTTP verbs. But RPC style calls, which are common for AJAX callbacks in Web applications, have gotten a lot less focus, and there are a few scenarios that are not that obvious, especially if you're expecting Web API to provide functionality similar to ASP.NET AJAX style AJAX callbacks.

    RPC vs. 'Proper' REST

    RPC style HTTP calls mimic calling a method with parameters and returning a result. Rather than mapping explicit server side resources or 'nouns', RPC calls tend to simply map a server side operation, passing in parameters and receiving a typed result, where parameters and result values are marshaled over HTTP. Typically RPC calls - like SOAP calls - tend to always be POST operations rather than following HTTP conventions and using the GET/POST/PUT/DELETE etc. verbs to implicitly determine what operation needs to be fired.

    RPC might not be considered 'cool' anymore, but for typical private AJAX backend operations of a Web site I'd wager that a large percentage of use cases of Web API will fall towards RPC style calls rather than 'proper' REST style APIs. Web applications that have needs for things like live validation against data, filling data based on user inputs, or handling small UI updates often don't lend themselves very well to limited HTTP verb usage. It might not be what the cool kids do, but I don't see RPC calls getting replaced by proper REST APIs any time soon. Proper REST has its place - for 'real' API scenarios that manage and publish/share resources - but for more transactional operations RPC seems a better choice and much easier to implement than trying to shoehorn a boatload of endpoint methods into a few HTTP verbs. In any case, Web API does a good job of providing both the RPC abstraction and the HTTP Verb/REST abstraction. RPC works well out of the box, but there are some differences, especially if you're coming from ASP.NET AJAX services or WCF REST, when it comes to multiple parameters.

    Action Routing for RPC Style Calls

    If you've looked at Web API demos you've probably seen a bunch of examples of how to create HTTP Verb based routing endpoints. Verb based routing essentially maps a controller and then uses HTTP verbs to map the methods that are called in response to HTTP requests. This works great for resource APIs but doesn't work so well when you have many operational methods in a single controller. HTTP Verb routing is limited to the few HTTP verbs available (plus separate method signatures) and - worse than that - you can't easily extend the controller with custom routes or action routing beyond that.
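    To make the contrast concrete, here is a minimal sketch of a verb-routed controller (hypothetical names, not code from the article; Album is the article's album type) in which the HTTP verb of the request selects the method by its name prefix:

        using System.Collections.Generic;
        using System.Web.Http;

        // With the default "api/{controller}/{id}" route, the request's HTTP verb
        // picks the method whose name starts with that verb.
        public class AlbumsController : ApiController
        {
            private static readonly List<Album> Albums = new List<Album>();

            // GET api/albums
            public IEnumerable<Album> Get()
            {
                return Albums;
            }

            // POST api/albums
            public string Post(Album album)
            {
                Albums.Add(album);
                return album.AlbumName;
            }
        }

    One controller supports only a handful of such methods, one per verb (plus overloads distinguished by signature), which is exactly the limitation described above.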
    Thankfully Web API also supports Action based routing, which allows you to create RPC style endpoints fairly easily:

        RouteTable.Routes.MapHttpRoute(
            name: "AlbumRpcApiAction",
            routeTemplate: "albums/{action}/{title}",
            defaults: new
            {
                title = RouteParameter.Optional,
                controller = "AlbumApi",
                action = "GetAblums"
            }
        );

    This uses traditional MVC style {action} method routing, which is different from the HTTP verb based routing you might have read a bunch about in conjunction with Web API. Action based routing like the above lets you specify an endpoint method in a Web API controller either via the {action} parameter in the route string or via a default value for custom routes. Using routing you can pass multiple parameters either on the route itself, or on the query string, via ModelBinding or content value binding. For most common scenarios this actually works very well. As long as you are passing either a single complex type via a POST operation, or multiple simple types via query string or POST buffer, there's no issue. But if you need to pass multiple parameters, as was easily done with WCF REST or ASP.NET AJAX, things are not so obvious. Web API has no issue allowing for a single parameter like this:

        [HttpPost]
        public string PostAlbum(Album album)
        {
            return String.Format("{0} {1:d}", album.AlbumName, album.Entered);
        }

    There are actually two ways to call this albums/PostAlbum endpoint.

    Using the Model Binder with plain POST values

    In this mechanism you're sending plain urlencoded POST values to the server, which the ModelBinder then maps to the parameter: each property of the parameter type is matched to the POST value of the same name. This works similarly to the way that MVC's ModelBinder works. Here's how you can POST using the ModelBinder and jQuery:

        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            data: { AlbumName: "Dirty Deeds", Entered: "5/1/2012" },
            success: function (result) {
                alert(result);
            },
            error: function (xhr, status, p3, p4) {
                var err = "Error " + " " + status + " " + p3;
                if (xhr.responseText && xhr.responseText[0] == "{")
                    err = JSON.parse(xhr.responseText).message;
                alert(err);
            }
        });

    The model binder and its straight form based POST mechanism is great for posting data directly from HTML pages to model objects. It avoids having to do manual conversions for many operations and is a great boon for AJAX callback requests.

    Using the Web API JSON Formatter

    The other option is to post data using a JSON string. The process for this is similar, except that you create a JavaScript object and serialize it to JSON first:

        album = {
            AlbumName: "PowerAge",
            Entered: new Date(1977, 0, 1)
        }

        $.ajax({
            url: "albums/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify(album),
            success: function (result) {
                alert(result);
            }
        });

    Here the data is sent as a JSON object rather than form data, and the data is JSON encoded over the wire, which is a little more efficient since there's no UrlEncoding that occurs. BTW, notice that Web API automatically deals with the date: the Formatter and ModelBinder both automatically map the date properly to the Entered DateTime property of the Album object.

    Passing multiple Parameters to a Web API Controller

    Single parameters work fine in either of these RPC scenarios, and that's to be expected. ModelBinding always works against a single object because it maps a model.
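    The article never shows the Album type itself; a minimal shape consistent with the snippets above (an assumption on my part, not the author's actual class) would be:

        using System;

        public class Album
        {
            public string AlbumName { get; set; }   // mapped from the AlbumName POST/JSON value
            public DateTime Entered { get; set; }   // mapped from the Entered POST/JSON value
        }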
    But what happens when you want to pass multiple parameters? Consider an API Controller method that has a signature like the following:

        [HttpPost]
        public string PostAlbum(Album album, string userToken)

    Here I'm asking to pass two objects to an RPC method. Is that possible? This used to be fairly straightforward with either WCF REST or ASP.NET AJAX ASMX services, but as far as I can tell this is not directly possible using a POST operation with Web API. There are a few workarounds that you can use to make this work.

    Use both POST *and* QueryString Parameters in Conjunction

    If you have both complex and simple parameters, you can pass the simple parameters on the query string. The above would actually work with:

        /album/PostAlbum?userToken=sekkritt

    but that's not always possible. In this example it might not be a good idea to pass a user token on the query string, though. It also won't work if you need to pass multiple complex objects, since query string values do not support complex type mapping. They only work with simple types.

    Use a single Object that wraps the two Parameters

    If you go by service based architecture guidelines, every service method should always pass and return a single value only. The input should wrap potentially multiple input parameters, and the output should convey status as well as provide the result value. You typically have an xxxRequest and an xxxResponse class that wrap the inputs and outputs. Here's what this method might look like:

        public PostAlbumResponse PostAlbum(PostAlbumRequest request)
        {
            var album = request.Album;
            var userToken = request.UserToken;

            return new PostAlbumResponse()
            {
                IsSuccess = true,
                Result = String.Format("{0} {1:d} {2}", album.AlbumName, album.Entered, userToken)
            };
        }

    with these support types:

        public class PostAlbumRequest
        {
            public Album Album { get; set; }
            public User User { get; set; }
            public string UserToken { get; set; }
        }

        public class PostAlbumResponse
        {
            public string Result { get; set; }
            public bool IsSuccess { get; set; }
            public string ErrorMessage { get; set; }
        }

    To call this method you now have to assemble these objects on the client and send them up as JSON:

        var album = {
            AlbumName: "PowerAge",
            Entered: "1/1/1977"
        }
        var user = {
            Name: "Rick"
        }
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) {
                alert(result.Result);
            }
        });

    I assemble the individual types first and then combine them, in the data: property of the $.ajax() call, into the actual object passed to the server, which mimics the structure of the PostAlbumRequest server class with its Album, User and UserToken properties. This works well enough, but it gets tedious if you have to create Request and Response types for each method signature. If you have common parameters that are always passed (say you always pass an album or user token) you might be able to abstract this to a single object that gets reused for all methods, but this gets confusing too: overload a single 'parameter' too much and it becomes a nightmare to decipher what your method actually accepts.

    Use JObject to parse multiple Property Values out of an Object

    If you recall, ASP.NET AJAX and WCF REST used a 'wrapper' object to make default AJAX calls.
    Rather than directly calling a service, you always passed an object which contained properties for each parameter:

        { parm1: Value, parm2: Value2 }

    WCF REST/ASP.NET AJAX would then parse these top level property values and map them to the parameters of the endpoint method. This automatic type wrapping functionality is no longer available directly in Web API, but since Web API now uses JSON.NET for its JSON serializer, you can actually simulate that behavior with a little extra code. You can use the JObject class to receive a dynamic JSON result and then use the dynamic cast of JObject to walk through the child objects and even parse them into strongly typed objects. Here's how to do this on the API Controller end:

        [HttpPost]
        public string PostAlbum(JObject jsonData)
        {
            dynamic json = jsonData;
            JObject jalbum = json.Album;
            JObject juser = json.User;
            string token = json.UserToken;

            var album = jalbum.ToObject<Album>();
            var user = juser.ToObject<User>();

            return String.Format("{0} {1} {2}", album.AlbumName, user.Name, token);
        }

    This is clearly not as nice as having the parameters passed directly, but it works to allow you to pass multiple parameters and access them using Web API. JObject is JSON.NET's generic object container, which sports a nice dynamic interface that allows you to walk through the object's properties using standard 'dot' object syntax. All you have to do is cast the object to dynamic to get access to the property interface of the JSON type. Additionally, JObject also allows you to parse JObject instances into strongly typed objects, which enables us here to retrieve the two objects passed as parameters from this jQuery code:

        var album = {
            AlbumName: "PowerAge",
            Entered: "1/1/1977"
        }
        var user = {
            Name: "Rick"
        }
        var userToken = "sekkritt";

        $.ajax({
            url: "samples/PostAlbum",
            type: "POST",
            contentType: "application/json",
            data: JSON.stringify({ Album: album, User: user, UserToken: userToken }),
            success: function (result) {
                alert(result);
            }
        });

    Summary

    ASP.NET Web API brings many new features and many advantages over the older Microsoft AJAX and REST APIs, but realize that some things, like passing multiple strongly typed object parameters, will work a bit differently. It's not insurmountable; you just need to know what options are available to simulate this behavior. Now let me say here that it's probably not a good practice to pass a bunch of parameters to an API call. Ideally APIs should be closely factored to accept single parameters, or at least a single content parameter along with some identifier parameters that can be passed on the query string. But saying that doesn't mean that occasionally you don't run into a situation where you need to pass several objects to the server, and all three of the options I mentioned might have merit in different situations. For now I'm sure the question of how to pass multiple parameters will come up quite a bit from people migrating WCF REST or ASP.NET AJAX code to Web API.
    At least there are options available to make it work. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web API.

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications, developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0, to instead use the Oracle WebCenter WSRP Producer for .NET, integrated with WebLogic Portal 10.3.4. It has been a very winding path, and this blog entry is intended to share both the lessons learned and the approaches that led to them. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

    For the Curious

    From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

    A Brief (and Edited) History of the WSRP for .NET Technologies (as Relevant to this Post)

    Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support, an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time, with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
    The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:

    *****************************************

    Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.

    Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.

    Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.

    The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.

    *****************************************

    The advantage of creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator, the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application root.

    Differences between .NET Accelerator and WSRP Producer

    Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

    The basic terminology used to describe the participating applications in a WSRP environment is the same when applied to either the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the Consumer to the .NET Accelerator, the .NET Accelerator transforms this request into an ASP.NET request, receives the response, then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application.
    In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

    Highly Recommended

    Enabling Logging

    Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now?

    Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable WSRP Producer logging, update the Application_Start method in the Global.asax.cs of your .NET application by adding:

        log4net.Config.XmlConfigurator.Configure();

    IIS logs will usually (in a standard configuration) be in a subfolder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.

    Things You Must Do

    Merge Web.Config

    Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention :) Because the existing .NET application will become a sub-application of the WSRP Producer, you will want to merge required settings from the existing Web.Config into the one in the WSRP Producer.

    Use the WSRP Producer Master Page

    The Master Page installed for the WSRP Producer provides common, hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page %> declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
        <HTML>
        <HEAD>

    with:

        <asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">

    and:

        </HEAD>
        <body>
        <form id="theForm" method="post" runat="server">

    with:

        </asp:Content>
        <asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">

    and finally:

        </form>
        </body>
        </HTML>

    with:

        </asp:Content>

    In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

    It Happened to Me, It Might Happen to You…Or Not

    Watch for Use of Session or Request in OnInit

    In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that have not been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are any, the simplest solution is to create a new method and then call that method once the necessary object(s) are fully available. I find doing this at the start of the Page_Load method to be the simplest solution.

    Case Sensitivity

    .NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src= or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is.
    If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:

        <RewriterRule>
            <LookFor>(href=\"([^\"]+)</LookFor>
            <ChangeToAbsolute>true</ChangeToAbsolute>
            <ApplyTo>.axd,.css</ApplyTo>
            <MakeResource>true</MakeResource>
        </RewriterRule>

    may need to be duplicated as:

        <RewriterRule>
            <LookFor>(HREF=\"([^\"]+)</LookFor>
            <ChangeToAbsolute>true</ChangeToAbsolute>
            <ApplyTo>.axd,.css</ApplyTo>
            <MakeResource>true</MakeResource>
        </RewriterRule>

    While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

    Is it Still Relative?

    Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

    I Can See You But I Can't Find You

    This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile time error resulted:

        Error 155: The name 'form1' does not exist in the current context
        C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs, line 51, column 52 (project legacywebsite.module)

    Much hair-pulling research later, it was discovered that the use of the FindControl method was causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced, as the controls become embedded in other controls, requiring a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:

        ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));

    becomes this:

        ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));

    Generally, the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a Master Page has been added requires an additional method to recurse through the control collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:

        public static Control FindControlRecursive(Control Root, string Id)
        {
            if (Root.ID == Id)
                return Root;

            foreach (Control Ctl in Root.Controls)
            {
                Control FoundCtl = FindControlRecursive(Ctl, Id);
                if (FoundCtl != null)
                    return FoundCtl;
            }

            return null;
        }

    Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
    Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this:

        Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg");

    to this:

        Label lblErrMsg = (Label)FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg");

    The Master That Won't Step Aside

    In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient approach of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below:

        …
        <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder>
        </head>
        <body leftMargin="0" topMargin="0">
        <form id="TheForm" runat="server">
        <input type="hidden" name="key" id="key" value="" />
        <input type="hidden" name="formactionurl" id="formactionurl" value="" />
        <input type="hidden" name="handle" id="handle" value="" />
        <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" >
        </asp:ScriptManager>

    This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as sub-Masters to the WSRP Master Page.

    Moving On

    In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it.

    Migration Planning

    Provided that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined whether the cut-over should include a new producer, updating all of the portlets in the consumer, or whether the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP endpoint configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL endpoint, in which case the IIS configuration is where the updates will occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise.

    Location, Location, Location

    Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will also work for creating a site with a different name and then removing the old site. You can also change just the name in IIS.

    Manually Creating a WSRP Producer Site

    Instructions (NOTE: all executables used are the same ones used by the installer, and "wsrpdev" will be the name of the new instance):

    1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev.
    2. Bring up a command window as an administrator.
    3. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727
    4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1
    5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1
    6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.
    7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev.

    Tests:

    1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6.
    2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx
    3. Register the producer in WLP and test the portlet.

    Changing the Port used by WSRP Producer

    The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl.

    Do You Need to Migrate?

    Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x, and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies, then migration is certainly one activity you should be planning for.

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me is my obsession with making my code as efficient as possible. Many people might not realize how much of an undertaking this can be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind… In trying to make code efficient, there are many different factors that play a part – size of the project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine the best way to implement a feature to accomplish the efficiency that I look to achieve. One product that I have recently come to learn about – the Red Gate ANTS Performance (CLR) and Memory Profilers – gives me tools to accomplish that job more efficiently as well. In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against.

    Notice

    As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products in exchange for a free license to the program. I have not let this affect my opinions of the product in any way, and neither Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way.

    Introduction

    The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license. Since I look to make my code efficient, it was a no-brainer for me to try it out! One thing that I have to say took me by surprise is that upon downloading and installing the program, you fill out a form with your usual contact information. Sure enough, within 2 hours I received an email from a sales representative at Red Gate asking if she could help me achieve the most out of my trial time so it wouldn't go to waste. After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone that set up a demo session to give me a quick rundown of its features via an online meeting. After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gate's friendly and helpful representatives were a breath of fresh air, and something I was thankful for.

    ANTS CLR Profiler

    The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all. It installed both the CLR and Memory profilers, but also Visual Studio extensions to facilitate the usage of the profilers (click any images for full size images):

    The Visual Studio menu options (under the ANTS menu)

    Starting the CLR Performance Profiler from the start menu yields this window.

    If you follow the instructions after launching the program from the start menu (click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling:

    The New Session dialog. Lots of options.

    One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application. If I had to guess, I would imagine that this is caused by my DPI setting of 125%. This is a problem I have seen in other applications as well that don't scale well to different DPI scales.
    The profiler options give you the ability to profile:

    - .NET executable
    - ASP.NET web application (hosted in IIS)
    - ASP.NET web application (hosted in IIS Express)
    - ASP.NET web application (hosted in the Cassini Web Development Server)
    - SharePoint web application (hosted in IIS)
    - Silverlight 4+ application
    - Windows Service
    - COM+ server
    - XBAP (local XAML browser application)
    - Attach to an already running .NET 4 process

    Choosing each option provides a varying set of other variables/options that one can set, including application arguments, operating path, whether to record I/O performance, performance counters to record (43 counters in all!), etc. All in all, they give you the ability to profile many different .NET project types, and make it simple to do so. In most cases of my using this application, I would be using the built-in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options set up and start your program; however, Red Gate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/GUI to perform profiling after launching, they would provide a Visual Studio panel with the information and integrate more of the profiling project information into Visual Studio. So, now that we have an idea of what options the profiler gives us, it's time to test its abilities and features.

    Horrendous Example Code – Prime Number Generator

    One of my interests besides development is physics and math – what I went to college for. I have especially always been interested in prime numbers, as they are something of a mystery… So, to test the abilities of the profiler, I decided to write a small program, website, and library to generate prime numbers in the quantity that you ask for. I am going to start off with some terrible code and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post):

        interface IPrimes
        {
            IEnumerable<int> GetPrimes(int retrieve);
        }

    Simple enough, right? Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument. Next, I am going to implement this interface in the most basic way:

        public class DumbPrimes : IPrimes
        {
            public IEnumerable<int> GetPrimes(int retrieve)
            {
                //store a list of primes already found
                var _foundPrimes = new List<int>() { 2, 3 };

                //if i ask for 1 or two primes, return what asked for
                if (retrieve <= _foundPrimes.Count())
                    return _foundPrimes.Take(retrieve);

                //the next number to look at
                int _analyzing = 4;

                //since I already determined I don't have enough
                //execute at least once, and until quantity is sufficed
                do
                {
                    //assume prime until otherwise determined
                    bool isPrime = true;

                    //start dividing at 2
                    //divide until number is reached, or determined not prime
                    for (int i = 2; i < _analyzing && isPrime; i++)
                    {
                        //if (i) goes into _analyzing without a remainder,
                        //_analyzing is NOT prime
                        if (_analyzing % i == 0)
                            isPrime = false;
                    }

                    //if it is prime, add to found list
                    if (isPrime)
                        _foundPrimes.Add(_analyzing);

                    //increment number to analyze next
                    _analyzing++;
                } while (_foundPrimes.Count() < retrieve);

                return _foundPrimes;
            }
        }

    This is the simplest way to get primes, in my opinion.
Checking each number by the straight definition of a prime – is it divisible by anything besides 1 and itself. I have included this code in a base class library for my solution, as I am going to use it to demonstrate a couple of features of ANTS.  This class library is consumed by a simple non-MVVM WPF application, and a simple MVC4 website.  I will not post the WPF code here inline, as it is simply an ObservableCollection<int>, a label, two textbox’s, and a button. Starting a new Profiling Session So, in Visual Studio, I have just completed my first stint developing the GUI and DumbPrimes IPrimes class, so now I want to check my codes efficiency by profiling it.  All I have to do is build the solution (surprised initiating a profiling session doesn’t do this, but I suppose I can understand it), and then click the ANTS menu, followed by Profile Performance.  I am then greeted by the profiler starting up and already monitoring my program live: You are provided with a realtime graph at the top, and a pane at the bottom giving you information on how to proceed.  I am going to start by asking my program to show me the first 15000 primes: After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which did kill the process of my program too.  One important thing to note, is that the profiler by default wants to give you a lot of detail about the operation – line hit counts, time per line, percent time per line, etc…  The important thing to remember is that this itself takes a lot of time.  When running my program without the profiler attached, it can generate the 15000 primes in 5.18 seconds, compared to 74.5 seconds – almost a 1500 percent increase.  While this may seem like a lot, remember that there is a trade off.  It may be WAY more inefficient, however, I am able to drill down and make improvements to specific problem areas, and then decrease execution time all around. Analyzing the Profiling Session After clicking ‘Stop Profiling’, the process running my application stopped, and the entire execution time was automatically selected by ANTS, and the results shown below: Now there are a number of interesting things going on here, I am going to cover each in a section of its own: Real Time Performance Counter Bar (top of screen) At the top of the screen, is the real time performance bar.  As your application is running, this will constantly update with the currently selected performance counters status.  A couple of cool things to note are the fact that you can drag a selection around specific time periods to drill down the detail views in the lower 2 panels to information pertaining to only that period. After selecting a time period, you can bookmark a section and name it, so that it is easy to find later, or after reloaded at a later time.  You can also zoom in, out, or fit the graph to the space provided – useful for drilling down. It may be hard to see, but at the top of the processor time graph below the time ticks, but above the red usage graph, there is a green bar. This bar shows at what times a method that is selected in the ‘Call tree’ panel is called. Very cool to be able to click on a method and see at what times it made an impact. As I said before, ANTS provides 43 different performance counters you can hook into.  
Click the arrow next to the Performance tab at the top will allow you to change between different counters if you have them selected: Method Call Tree, ADO.Net Database Calls, File IO – Detail Panel Red Gate really hit the mark here I think. When you select a section of the run with the graph, the call tree populates to fill a hierarchical tree of method calls, with information regarding each of the methods.   By default, methods are hidden where the source is not provided (framework type code), however, Red Gate has integrated Reflector into ANTS, so even if you don’t have source for something, you can select a method and get the source if you want.  Methods are also hidden where the impact is seen as insignificant – methods that are only executed for 1% of the time of the overall calling methods time; in other words, working on making them better is not where your efforts should be focused. – Smart! Source Panel – Detail Panel The source panel is where you can see line level information on your code, showing the code for the currently selected method from the Method Call Tree.  If the code is not available, Reflector takes care of it and shows the code anyways! As you can notice, there does seem to be a problem with how ANTS determines what line is the actual line that a call is completed on.  I have suspicions that this may be due to some of the inline code optimizations that the CLR applies upon compilation of the assembly.  In a method with comments, the problem is much more severe: As you can see here, apparently the most offending code in my base library was a comment – *gasp*!  Removing the comments does help quite a bit, however I hope that Red Gate works on their counter algorithm soon to improve the logic on positioning for statistics: I did a small test just to demonstrate the lines are correct without comments. For me, it isn’t a deal breaker, as I can usually determine the correct placements by looking at the application code in the region and determining what makes sense, but it is something that would probably build up some irritation with time. Feature – Suggest Method for Optimization A neat feature to really help those in need of a pointer, is the menu option under tools to automatically suggest methods to optimize/improve: Nice feature – clicking it filters the call tree and stars methods that it thinks are good candidates for optimization.  I do wish that they would have made it more visible for those of use who aren’t great on sight: Process Integration I do think that this could have a place in my process.  After experimenting with the profiler, I do think it would be a great benefit to do some development, testing, and then after all the bugs are worked out, use the profiler to check on things to make sure nothing seems like it is hogging more than its fair share.  For example, with this program, I would have developed it, ran it, tested it – it works, but slowly. After looking at the profiler, and seeing the massive amount of time spent in 1 method, I might go ahead and try to re-implement IPrimes (I actually would probably rewrite the offending code, but so that I can distribute both sets of code easily, I’m just going to make another implementation of IPrimes).  
Using two pieces of knowledge about prime numbers can make this method MUCH more efficient – every prime greater than 3 takes the form 6k±1 (any other number is divisible by 2 or 3), and a number is prime if it is not divisible by any prime before it: public class SmartPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _k = 1; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; int potentialPrime; //analyze 6k-1 //assign the value to potential potentialPrime = 6 * _k - 1; //if there are any primes that divide this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); if (_foundPrimes.Count() == retrieve) break; //analyze 6k+1 //assign the value to potential potentialPrime = 6 * _k + 1; //if there are any primes that divide this, it is NOT a prime number //using PLINQ for quick boost isPrime = !_foundPrimes.AsParallel() .Any(prime => potentialPrime % prime == 0); //if it is prime, add to found list if (isPrime) _foundPrimes.Add(potentialPrime); //increment k to analyze next _k++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } Now there are definitely more things I can do to help make this more efficient, but for the scope of this example, I think this is fine (but still hideous)! Profiling this now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without (a small harness for reproducing such raw timings is sketched at the end of this review).  One important thing I wanted to call out though was the performance graph now: Notice anything odd?  The %Processor time is above 100%.  This is because there is now more than 1 core in the operation.  A better label for the chart in my mind would have been %Core time, but to each their own. Another odd thing I noticed was that the profiler seemed to be spot on this time in my DumbPrimes class with line details in source, even with comments…  Odd. Profiling Web Applications The last thing that I wanted to cover, and one that means a lot to me as a web developer, is the great amount of work that Red Gate put into the profiler for profiling web applications.  In my solution, I have a simple MVC4 application set up with one page and a single input form that will output prime values as my WPF app did.  Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, however I did receive a UAC prompt, without warning, for a Red Gate helper app to integrate with the web server. After requesting 500, 1000, 2000, and 5000 primes, and looking at the profiler session, things are slightly different from before: As you can see, there are 4 spikes of activity in the processor time graph, but there is also something new in the call tree: That’s right – ANTS will actually group method calls by get/post operations, so it is easier to find out what action/page is giving the largest problems…  Pretty cool in my mind! Overview Overall, I think that the Red Gate ANTS CLR Profiler has a lot to offer; however, I think it also has a long way to go. 
3 Biggest Pros: Ability to easily drill down from time graph, to method calls, to source code Wide variety of counters to choose from when profiling your application Excellent integration/grouping of methods being called from web applications by request – BRILLIANT! 3 Biggest Cons: Issue regarding line details in source view Nit pick – Processor time vs. Core time Nit pick – Lack of full integration with Visual Studio Ratings Ease of Use (7/10) – I marked down here because of the problems with the line-level details and the extra work that entails, and the lack of better integration with Visual Studio. Effectiveness (10/10) – I believe that the profiler does EXACTLY what it purports to do.  Especially with its large variety of performance counters, a definite plus! Features (9/10) – Besides the real-time performance monitoring and the drill-downs that I’ve shown here, ANTS also has great integration with ADO.Net, with the ability to show database queries run by your application in the profiler.  These, along with the line-level details, the web request grouping, the Reflector integration, and the various options to customize your profiling session, make for a great set of features, I think! Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good.  Their people are friendly, helpful, and happy! UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find.  With a little bit of poking around, you’ll be optimizing Hello World in no time flat! Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received when working with the Red Gate personnel.  I WOULD recommend trying the application and seeing if it would fit into your process, BUT remember there are still some kinks in it to hopefully be worked out. My next post will definitely be shorter (hopefully), but thank you for reading up to here, or skipping ahead!  Please, if you do try the product, drop me a message and let me know what you think!  I would love to hear any opinions you may have on the product. Code Feel free to download the code I used above – download via DropBox
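None of the following appears in the original post – it is a hypothetical, minimal console sketch of how you might reproduce the raw, profiler-free timings quoted above (the 5.18 and 1.43 second figures for 15000 primes), assuming the IPrimes, DumbPrimes and SmartPrimes types shown earlier are referenced:

    using System;
    using System.Diagnostics;
    using System.Linq;

    class TimingHarness
    {
        static void Main()
        {
            // Time both IPrimes implementations outside the profiler.
            foreach (IPrimes impl in new IPrimes[] { new DumbPrimes(), new SmartPrimes() })
            {
                var watch = Stopwatch.StartNew();
                var primes = impl.GetPrimes(15000).ToList();
                watch.Stop();
                Console.WriteLine("{0}: {1} primes in {2:N2} s",
                    impl.GetType().Name, primes.Count, watch.Elapsed.TotalSeconds);
            }
        }
    }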

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS also provides an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URL's, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and drop it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. 
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET'S Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both maximize bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests: if (filterContext.HttpContext.Request.IsAjaxRequest()){    filterContext.Result = Json(new ErrorModel(...));    filterContext.ExceptionHandled = true;} My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal. Summary As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.
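To make the error-handling discussion concrete, here is a minimal, hypothetical sketch of how the pieces above might fit together in a base controller – the policy name "WebPolicy", the controller name, and the exact ErrorModel shape are my assumptions, not taken from the article:

    using System.Web.Mvc;
    using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;

    public class ErrorModel
    {
        // Keep the payload minimal: send as little information to the client as possible.
        public string Error { get; set; }
    }

    public class BaseController : Controller
    {
        protected override void OnException(ExceptionContext filterContext)
        {
            // Hand the exception to the configured EntLib policy (logs via the LoggingExceptionHandler).
            ExceptionPolicy.HandleException(filterContext.Exception, "WebPolicy");

            if (filterContext.HttpContext.Request.IsAjaxRequest())
            {
                // Return a terse JSON error instead of the Yellow Screen Of Death.
                filterContext.Result = Json(
                    new ErrorModel { Error = "An unexpected error occurred." },
                    JsonRequestBehavior.AllowGet);
                filterContext.ExceptionHandled = true;
            }

            base.OnException(filterContext);
        }
    }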

    Read the article

  • What are ways to prevent files with the Right-to-Left Override Unicode character in their name (a malware spoofing method) from being written or read?

    - by galacticninja
    What are ways to avoid or prevent files with the RLO (Right-to-Left Override) Unicode character in their names (a malware method for spoofing filenames) from being written or read on a Windows PC? More info on the RLO unicode character here: http://www.fileformat.info/info/unicode/char/202e/index.htm http://en.wikipedia.org/wiki/Bi-directional_text Info on the RLO unicode character when used by malware: http://www.ipa.jp/security/english/virus/press/201110/E_PR201110.html Mirror link: http://webcache.googleusercontent.com/search?q=cache:KasmfOvbVJ8J:www.ipa.jp/security/english/virus/press/201110/E_PR201110.html+&cd=1&hl=en&ct=clnk You can try this RLO character test webpage: http://www.fileformat.info/info/unicode/char/202e/browsertest.htm The RLO character is already pasted in the 'Input Test' field on that webpage. Try typing there and notice that the characters you type come out in reverse order (right-to-left, instead of left-to-right). In filenames, the RLO character can be positioned to make a file appear to have a different filename or file extension than it actually has. (The spoof still works even if 'Hide extensions for known file types' is unchecked.) The only info I can find on how to prevent files with the RLO character from being run is from the Information Technology Promotion Agency, Japan website: http://www.ipa.jp/security/english/virus/press/201110/E_PR201110.html (Mirror link). They advised using the Local Security Policy settings manager to block files with the RLO character in their names from being run. Can anyone recommend any other good solutions to prevent files with the RLO character in their names from being written or read on the computer, or a way to alert the user when a file with the RLO character is detected? My OS is Windows 7, but I'll also be looking for solutions for Windows XP and Vista, or a solution that will work for all those OSes, to help people using those OSes too.
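Not part of the original question, but as a starting point for the "alert the user" idea: a minimal, hypothetical C# sketch that scans a folder tree for names containing U+202E (the class name and command-line convention are my own assumptions):

    using System;
    using System.IO;

    class RloScanner
    {
        const char RightToLeftOverride = '\u202E';

        static void Main(string[] args)
        {
            // Pass the root folder to scan, e.g. RloScanner.exe C:\Users\me\Downloads
            string root = args.Length > 0 ? args[0] : ".";
            foreach (string path in Directory.EnumerateFileSystemEntries(
                root, "*", SearchOption.AllDirectories))
            {
                // Flag any file or folder whose name contains the RLO character.
                if (Path.GetFileName(path).IndexOf(RightToLeftOverride) >= 0)
                    Console.WriteLine("Possible RLO spoofing: " + path);
            }
        }
    }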

    Read the article

  • The Incremental Architect’s Napkin - #5 - Design functions for extensibility and readability

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/08/24/the-incremental-architectrsquos-napkin---5---design-functions-for.aspx The functionality of programs is entered via Entry Points. So what we´re talking about when designing software is a bunch of functions handling the requests represented by and flowing in through those Entry Points. Designing software thus consists of at least three phases: Analyzing the requirements to find the Entry Points and their signatures Designing the functionality to be executed when those Entry Points get triggered Implementing the functionality according to the design aka coding I presume you´re familiar with phase 1 in some way. And I guess you´re proficient in implementing functionality in some programming language. But in my experience developers in general are not experienced in going through an explicit phase 2. “Designing functionality? What´s that supposed to mean?” you might already have thought. Here´s my definition: To design functionality (or functional design for short) means thinking about… well, functions. You find a solution for what´s supposed to happen when an Entry Point gets triggered in terms of functions. A conceptual solution that is, because those functions only exist in your head (or on paper) during this phase. But you may have guessed that, because it´s “design” not “coding”. And here is what functional design is not: It´s not about logic. Logic is expressions (e.g. +, -, && etc.) and control statements (e.g. if, switch, for, while etc.). Also I consider calling external APIs as logic. It´s equally basic. It´s what code needs to do in order to deliver some functionality or quality. Logic is what does what needs to be done by software. Transformations are either done through expressions or API-calls. And then there is alternative control flow depending on the result of some expression. Basically it´s just jumps in Assembler, sometimes to go forward (if, switch), sometimes to go backward (for, while, do). But calling your own function is not logic. It´s not necessary to produce any outcome. Functionality is not enhanced by adding functions (subroutine calls) to your code. Nor is quality increased by adding functions. No performance gain, no higher scalability etc. through functions. Functions are not relevant to functionality. Strange, isn´t it? What they are important for is security of investment. By introducing functions into our code we can become more productive (re-use) and can increase evolvability (higher understandability, easier to keep code consistent). That´s no small feat, however. Evolvable code can hardly be overestimated. That´s why to me functional design is so important. It´s at the core of software development. To sum this up: Functional design is on a level of abstraction above (!) logical design or algorithmic design. Functional design is only done until you get to a point where each function is so simple you are very confident you can easily code it. Functional design and logical design (which mostly is coding, but can also be done using pseudo code or flow charts) are complementary. Software needs both. If you start coding right away you end up in a tangled mess very quickly. Then you need to back out through refactoring. Functional design on the other hand is bloodless without actual code. It´s just a theory with no experiments to prove it. But how to do functional design? An example of functional design Let´s assume a program to de-duplicate strings. 
The user enters a number of strings separated by commas, e.g. a, b, a, c, d, b, e, c, a. And the program is supposed to clear this list of all doubles, e.g. a, b, c, d, e. There is only one Entry Point to this program: the user triggers the de-duplication by starting the program with the string list on the command line C:\>deduplicate "a, b, a, c, d, b, e, c, a" a, b, c, d, e …or by clicking on a GUI button. This leads to the Entry Point function to get called. It´s the program´s main function in case of the batch version or a button click event handler in the GUI version. That´s the physical Entry Point so to speak. It´s inevitable. What then happens is a three step process: Transform the input data from the user into a request. Call the request handler. Transform the output of the request handler into a tangible result for the user. Or to phrase it a bit more generally: Accept input. Transform input into output. Present output. This does not mean any of these steps requires a lot of effort. Maybe it´s just one line of code to accomplish it. Nevertheless it´s a distinct step in doing the processing behind an Entry Point. Call it an aspect or a responsibility - and you will realize it most likely deserves a function of its own to satisfy the Single Responsibility Principle (SRP). Interestingly the above list of steps is already functional design. There is no logic, but nevertheless the solution is described - albeit on a higher level of abstraction than you might have done yourself. But it´s still on a meta-level. The application to the domain at hand is easy, though: Accept string list from command line De-duplicate Present de-duplicated strings on standard output And this concrete list of processing steps can easily be transformed into code:static void Main(string[] args) { var input = Accept_string_list(args); var output = Deduplicate(input); Present_deduplicated_string_list(output); } Instead of a big problem there are three much smaller problems now. If you think each of those is trivial to implement, then go for it. You can stop the functional design at this point. But maybe, just maybe, you´re not so sure how to go about with the de-duplication for example. Then just implement what´s easy right now, e.g.private static string Accept_string_list(string[] args) { return args[0]; } private static void Present_deduplicated_string_list( string[] output) { var line = string.Join(", ", output); Console.WriteLine(line); } Accept_string_list() contains logic in the form of an API-call. Present_deduplicated_string_list() contains logic in the form of an expression and an API-call. And then repeat the functional design for the remaining processing step. What´s left is the domain logic: de-duplicating a list of strings. How should that be done? Without any logic at our disposal during functional design you´re left with just functions. So which functions could make up the de-duplication? Here´s a suggestion: De-duplicate Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Processing step 2 obviously was the core of the solution. That´s where real creativity was needed. That´s the core of the domain. 
But now after this refinement the implementation of each step is easy again:private static string[] Parse_string_list(string input) { return input.Split(',') .Select(s => s.Trim()) .ToArray(); } private static Dictionary<string,object> Compile_unique_strings(string[] strings) { return strings.Aggregate( new Dictionary<string, object>(), (agg, s) => { agg[s] = null; return agg; }); } private static string[] Serialize_unique_strings( Dictionary<string,object> dict) { return dict.Keys.ToArray(); } With these three additional functions Main() now looks like this:static void Main(string[] args) { var input = Accept_string_list(args); var strings = Parse_string_list(input); var dict = Compile_unique_strings(strings); var output = Serialize_unique_strings(dict); Present_deduplicated_string_list(output); } I think that´s very understandable code: just read it from top to bottom and you know how the solution to the problem works. It´s a mirror image of the initial design: Accept string list from command line Parse the input string into a true list of strings. Register each string in a dictionary/map/set. That way duplicates get cast away. Transform the data structure into a list of unique strings. Present de-duplicated strings on standard output You can even re-generate the design by just looking at the code. Code and functional design thus are always in sync - if you follow some simple rules. But about that later. And as a bonus: all the functions making up the process are small - which means easy to understand, too. So much for an initial concrete example. Now it´s time for some theory. Because there is method to this madness ;-) The above has only scratched the surface. Introducing Flow Design Functional design starts with a given function, the Entry Point. Its goal is to describe the behavior of the program when the Entry Point is triggered using a process, not an algorithm. An algorithm consists of logic, a process on the other hand consists just of steps or stages. Each processing step transforms input into output or a side effect. Also it might access resources, e.g. a printer, a database, or just memory. Processing steps thus can rely on state of some sort. This is different from Functional Programming, where functions are supposed to not be stateful and not cause side effects.[1] In its simplest form a process can be written as a bullet point list of steps, e.g. Get data from user Output result to user Transform data Parse data Map result for output Such a compilation of steps - possibly on different levels of abstraction - often is the first artifact of functional design. It can be generated by a team in an initial design brainstorming. Next comes ordering the steps. What should happen first, what next etc.? Get data from user Parse data Transform data Map result for output Output result to user That´s great for a start into functional design. It´s better than starting to code right away on a given function using TDD. Please get me right: TDD is a valuable practice. But it can be unnecessarily hard if the scope of a function is too large. But how do you know beforehand without investing some thinking? And how to do this thinking in a systematic fashion? My recommendation: For any given function you´re supposed to implement, first do a functional design. Then, once you´re confident you know the processing steps - which are pretty small - refine and code them using TDD. You´ll see that´s much, much easier - and leads to cleaner code right away. 
For more information on this approach, which I call “Informed TDD”, read my book of the same title. Thinking before coding is smart. And writing down the solution as a bunch of functions possibly is the simplest thing you can do, I´d say. It´s more according to the KISS (Keep It Simple, Stupid) principle than returning constants or other trivial stuff TDD development often is started with. So far so good. A simple ordered list of processing steps will do to start with functional design. As shown in the above example such steps can easily be translated into functions. Moving from design to coding thus is simple. However, such a list does not scale. Processing is not always that simple to be captured in a list. And then the list is just text. Again. Like code. That means the design is lacking visuality. Textual representations need more parsing by your brain than visual representations. Plus they are limited in their “dimensionality”: text just has one dimension, it´s sequential. Alternatives and parallelism are hard to encode in text. In addition the functional design using numbered lists lacks data. It´s not visible what´s the input, output, and state of the processing steps. That´s why functional design should be done using a lightweight visual notation. No tool is necessary to draw such designs. Use pen and paper; a flipchart, a whiteboard, or even a napkin is sufficient. Visualizing processes The building block of the functional design notation is a functional unit. I mostly draw it like this: Something is done, it´s clear what goes in, it´s clear what comes out, and it´s clear what the processing step requires in terms of state or hardware. Whenever input flows into a functional unit it gets processed and output is produced and/or a side effect occurs. Flowing data is the driver of something happening. That´s why I call this approach to functional design Flow Design. It´s about data flow instead of control flow. Control flow like in algorithms is of no concern to functional design. Thinking about control flow simply is too low level. Once you start with control flow you easily get bogged down by tons of details. That´s what you want to avoid during design. Design is supposed to be quick, broad brush, abstract. It should give overview. But what about all the details? As Robert C. Martin rightly said: “Programming is about detail”. Detail is a matter of code. Once you start coding the processing steps you designed you can worry about all the detail you want. Functional design does not eliminate all the nitty gritty. It just postpones tackling it. To me that´s also an example of the SRP. Functional design has the responsibility to come up with a solution to a problem posed by a single function (Entry Point). And later coding has the responsibility to implement the solution down to the last detail (i.e. statement, API-call). TDD unfortunately mixes both responsibilities. It´s just coding - and thereby trying to find detailed implementations (green phase) plus getting the design right (refactoring). To me that´s one reason why TDD has failed to deliver on its promise for many developers. Using functional units as building blocks of functional design processes can be depicted very easily. Here´s the initial process for the example problem: For each processing step draw a functional unit and label it. Choose a verb or an “action phrase” as a label, not a noun. Functional design is about activities, not state or structure. Then make the output of an upstream step the input of a downstream step. 
Finally think about the data that should flow between the functional units. Write the data above the arrows connecting the functional units in the direction of the data flow. Enclose the data description in brackets. That way you can clearly see if all flows have already been specified. Empty brackets mean “no data is flowing”, but nevertheless a signal is sent. A name like “list” or “strings” in brackets describes the data content. Use lower case labels for that purpose. A name starting with an upper case letter like “String” or “Customer” on the other hand signifies a data type. If you like, you also can combine descriptions with data types by separating them with a colon, e.g. (list:string) or (strings:string[]). But these are just suggestions from my practice with Flow Design. You can do it differently, if you like. Just be sure to be consistent. Flows wired-up in this manner I call one-dimensional (1D). Each functional unit just has one input and/or one output. A functional unit without an output is possible. It´s like a black hole sucking up input without producing any output. Instead it produces side effects. A functional unit without an input, though, does not make much sense. When should it start to work? What´s the trigger? That´s why in the above process even the first processing step has an input. If you like, view such 1D-flows as pipelines. Data is flowing through them from left to right. But as you can see, it´s not always the same data. It gets transformed along its passage: (args) becomes a (list) which is turned into (strings). The Principle of Mutual Oblivion A very characteristic trait of flows put together from functional units is: no functional unit knows another one. They are all completely independent of each other. Functional units don´t know where their input is coming from (or even when it´s gonna arrive). They just specify a range of values they can process. And they promise a certain behavior upon input arriving. Also they don´t know where their output is going. They just produce it in their own time independent of other functional units. That means at least conceptually all functional units work in parallel. Functional units don´t know their “deployment context”. They know nothing about the overall flow they are placed in. They are just consuming input from some upstream, and producing output for some downstream. That makes functional units very easy to test. At least as long as they don´t depend on state or resources. I call this the Principle of Mutual Oblivion (PoMO). Functional units are oblivious of others as well as an overall context/purpose. They are just parts of a whole focused on a single responsibility. How the whole is built, how a larger goal is achieved, is of no concern to the single functional units. By building software in such a manner, functional design interestingly follows nature. Nature´s building blocks for organisms also follow the PoMO. The cells forming your body do not know each other. Take a nerve cell “controlling” a muscle cell for example:[2] The nerve cell does not know anything about muscle cells, let alone the specific muscle cell it is “attached to”. Likewise the muscle cell does not know anything about nerve cells, let alone a specific nerve cell “attached to” it. Saying “the nerve cell is controlling the muscle cell” thus only makes sense when viewing both from the outside. “Control” is a concept of the whole, not of its parts. Control is created by wiring-up parts in a certain way. Both cells are mutually oblivious. 
Both just follow a contract. One produces Acetylcholine (ACh) as output, the other consumes ACh as input. Where the ACh is going, where it´s coming from, neither cell cares about. Millions of years of evolution have led to this kind of division of labor. And millions of years of evolution have produced organism designs (DNA) which lead to the production of these different cell types (and many others) and also to their co-location. The result: the overall behavior of an organism. How and why this happened in nature is a mystery. For our software, though, it´s clear: functional and quality requirements need to be fulfilled. So we as developers have to become “intelligent designers” of “software cells” which we put together to form a “software organism” which responds in satisfying ways to triggers from its environment. My bet is: If nature gets complex organisms working by following the PoMO, who are we to not apply this recipe for success to our much simpler “machines”? So my rule is: Wherever there is functionality to be delivered, because there is a clear Entry Point into software, design the functionality like nature would do it. Build it from mutually oblivious functional units. That´s what Flow Design is about. In that way it´s even universal, I´d say. Its notation can also be applied to biology: Never mind labeling the functional units with nouns. That´s ok in Flow Design. You´ll do that occasionally for functional units on a higher level of abstraction or when their purpose is close to hardware. Getting a cockroach to roam your bedroom takes 1,000,000 nerve cells (neurons). Getting the de-duplication program to do its job just takes 5 “software cells” (functional units). Both, though, follow the same basic principle. Translating functional units into code Moving from functional design to code is no rocket science. In fact it´s straightforward. There are two simple rules: Translate an input port to a function. Translate an output port either to a return statement in that function or to a function pointer visible to that function. The simplest translation of a functional unit is a function. That´s what you saw in the above example. Functions are mutually oblivious. That´s why Functional Programming likes them so much. It makes them composable. Which is the reason nature works according to the PoMO. Let´s be clear about one thing: There is no dependency injection in nature. For all of an organism´s complexity no DI container is used. Behavior is the result of smooth cooperation between mutually oblivious building blocks. Functions will often be the adequate translation for the functional units in your designs. But not always. Take for example the case where a processing step should not always produce an output. Maybe the purpose is to filter input. Here the functional unit consumes words and produces words. But it does not pass along every word flowing in. Some words are swallowed. Think of a spell checker. It probably should not check acronyms for correctness. There are too many of them. Or words with no more than two letters. Such words are called “stop words”. In the above picture the optionality of the output is signified by the asterisk outside the brackets. It means: Any number of (word) data items can flow from the functional unit for each input data item. It might be none or one or even more. This I call a stream of data. Such behavior cannot be translated into a function where output is generated with return. Because a function always needs to return a value. 
So the output port is translated into a function pointer or continuation which gets passed to the subroutine when called:[3]void filter_stop_words( string word, Action<string> onNoStopWord) { if (...check if not a stop word...) onNoStopWord(word); } If you want to be nitpicky you might call such a function pointer parameter an injection. And technically you´re right. Conceptually, though, it´s not an injection. Because the subroutine is not functionally dependent on the continuation. Firstly continuations are procedures, i.e. subroutines without a return type. Remember: Flow Design is about unidirectional data flow. Secondly the name of the formal parameter is chosen in a way as to not assume anything about downstream processing steps. onNoStopWord describes a situation (or event) within the functional unit only. Translating output ports into function pointers helps keeping functional units mutually oblivious in cases where output is optional or produced asynchronously. Either pass the function pointer to the function upon call. Or make it global by putting it on the encompassing class. Then it´s called an event. In C# that´s even an explicit feature.class Filter { public void filter_stop_words( string word) { if (...check if not a stop word...) onNoStopWord(word); } public event Action<string> onNoStopWord; } When to use a continuation and when to use an event depends on how a functional unit is used in flows and how it´s packed together with others into classes. You´ll see examples further down the Flow Design road. Another example of 1D functional design Let´s see Flow Design once more in action using the visual notation. How about the famous word wrap kata? Robert C. Martin has posted a much-cited solution including an extensive reasoning behind his TDD approach. So maybe you want to compare it to Flow Design. The function signature given is:string WordWrap(string text, int maxLineLength) {...} That´s not an Entry Point since we don´t see an application with an environment and users. Nevertheless it´s a function which is supposed to provide a certain functionality. The text passed in has to be reformatted. The input is a single line of arbitrary length consisting of words separated by spaces. The output should consist of one or more lines of a maximum length specified. If a word is longer than the maximum line length it can be split in multiple parts each fitting in a line. Flow Design Let´s start by brainstorming the process to accomplish the feat of reformatting the text. What´s needed? Words need to be assembled into lines Words need to be extracted from the input text The resulting lines need to be assembled into the output text Words too long to fit in a line need to be split Does that sound about right? I guess so. And it shows a kind of priority. Long words are a special case. So maybe there is a hint for an incremental design here. First let´s tackle “average words” (words not longer than a line). Here´s the Flow Design for this increment: The first three bullet points turned into functional units with explicit data added. As the signature requires, a text is transformed into another text. See the input of the first functional unit and the output of the last functional unit. In between no text flows, but words and lines. That´s good to see because thereby the domain is clearly represented in the design. The requirements are talking about words and lines and here they are. But note the asterisk! It´s not outside the brackets but inside. 
That means it´s not a stream of words or lines, but lists or sequences. For each text a sequence of words is output. For each sequence of words a sequence of lines is produced. The asterisk is used to abstract from the concrete implementation. Like with streams. Whether the list of words gets implemented as an array or an IEnumerable is not important during design. It´s an implementation detail. Does any processing step require further refinement? I don´t think so. They all look pretty “atomic” to me. And if not… I can always backtrack and refine a process step using functional design later once I´ve gained more insight into a sub-problem. Implementation The implementation is straightforward as you can imagine. The processing steps can all be translated into functions. Each can be tested easily and separately. Each has a focused responsibility. And the process flow becomes just a sequence of function calls: Easy to understand. It clearly states how word wrapping works - on a high level of abstraction. And it´s easy to evolve as you´ll see. Flow Design - Increment 2 So far only texts consisting of “average words” are wrapped correctly. Words not fitting in a line will result in lines too long. Wrapping long words is a feature of the requested functionality. Whether it´s there or not makes a difference to the user. To quickly get feedback I decided to first implement a solution without this feature. But now it´s time to add it to deliver the full scope. Fortunately Flow Design automatically leads to code following the Open Closed Principle (OCP). It´s easy to extend it - instead of changing well-tested code. How´s that possible? Flow Design allows for extension of functionality by inserting functional units into the flow. That way existing functional units need not be changed. The data flow arrow between functional units is a natural extension point. No need to resort to the Strategy Pattern. No need to think ahead where extensions might need to be made in the future. I just “phase in” the remaining processing step: Since neither Extract words nor Reformat know of their environment, neither needs to be touched due to the “detour”. The new processing step accepts the output of the existing upstream step and produces data compatible with the existing downstream step. Implementation - Increment 2 A trivial implementation checking the assumption that this works does not do anything to split long words. The input is just passed on: Note how clean WordWrap() stays. The solution is easy to understand. A developer looking at this code sometime in the future, when a new feature needs to be built in, quickly sees how long words are dealt with. (A rough sketch of what the composed flow might look like in code follows below.) Compare this to Robert C. Martin´s solution:[4] How does this solution handle long words? Long words are not even part of the domain language present in the code. At least I need considerable time to understand the approach. Admittedly the Flow Design solution with the full implementation of long word splitting is longer than Robert C. Martin´s. At least it seems so. Because his solution does not cover all the “word wrap situations” the Flow Design solution handles. Some lines would need to be added to be on par, I guess. But even then… Is a difference in LOC that important as long as it´s in the same ball park? I value understandability and openness for extension higher than saving on the last line of code. Simplicity is not just less code, it´s also clarity in design. But don´t take my word for it. Try Flow Design on larger problems and compare for yourself. 
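The actual implementations in the original post are shown as screenshots. As a purely hypothetical reconstruction – the function names are assumptions derived from the design above, not the author´s actual code – the composed flow for increment 2 might look roughly like this:

    using System;
    using System.Collections.Generic;

    static class Wrapper
    {
        // Composed flow – one function call per functional unit in the design.
        public static string WordWrap(string text, int maxLineLength)
        {
            var words = Extract_words(text);
            words = Split_long_words(words, maxLineLength); // increment 2, phased into the flow
            var lines = Assemble_words_into_lines(words, maxLineLength);
            return Assemble_lines_into_text(lines);
        }

        static string[] Extract_words(string text)
        {
            return text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
        }

        // Trivial first implementation of the new step: just pass the input on.
        static string[] Split_long_words(string[] words, int maxLineLength)
        {
            return words;
        }

        static string[] Assemble_words_into_lines(string[] words, int maxLineLength)
        {
            var lines = new List<string>();
            var line = "";
            foreach (var word in words)
            {
                // Start a new line when the next word would not fit anymore.
                if (line.Length > 0 && line.Length + 1 + word.Length > maxLineLength)
                {
                    lines.Add(line);
                    line = "";
                }
                line = line.Length == 0 ? word : line + " " + word;
            }
            if (line.Length > 0) lines.Add(line);
            return lines.ToArray();
        }

        static string Assemble_lines_into_text(string[] lines)
        {
            return string.Join("\n", lines);
        }
    }

For example, Wrapper.WordWrap("a bb ccc", 5) would yield "a bb" and "ccc" on separate lines; long words pass through unsplit until the trivial Split_long_words step is fleshed out.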
What´s the easier, more straightforward way to clean code? And keep in mind: You ain´t seen all yet ;-) There´s more to Flow Design than described in this chapter. In closing I hope I was able to give you an impression of functional design that makes you hungry for more. To me it´s an inevitable step in software development. Jumping from requirements to code does not scale. And it leads to dirty code all too quickly. Some thought should be invested first. Where there is a clear Entry Point visible, its functionality should be designed using data flows. Because with data flows abstraction is possible. For more background on why that´s necessary read my blog article here. For now let me point out to you - if you haven´t already noticed - that Flow Design is a general purpose declarative language. It´s “programming by intention” (Shalloway et al.). Just write down how you think the solution should work on a high level of abstraction. This breaks down a large problem into smaller problems. And by following the PoMO the solutions to those smaller problems are independent of each other. So they are easy to test. Or you could even think about getting them implemented in parallel by different team members. Flow Design not only increases evolvability, but also helps you become more productive. All team members can participate in functional design. This goes beyond collective code ownership. We´re talking collective design/architecture ownership. Because with Flow Design there is a common visual language to talk about functional design - which is the foundation for all other design activities.   PS: If you like what you read, consider getting my ebook “The Incremental Architect´s Napkin”. It´s where I compile all the articles in this series for easier reading. [1] I like the strictness of Functional Programming - but I also find it quite hard to live by. And it certainly is not what millions of programmers are used to. Also, to me it seems the real world is full of state and side effects. So why give them such a bad image? That´s why functional design takes a more pragmatic approach. State and side effects are ok for processing steps - but be sure to follow the SRP. Don´t put too much of it into a single processing step. [2] Image taken from www.physioweb.org [3] My code samples are written in C#. C# sports typed function pointers called delegates. Action<T> is such a function pointer type matching functions with signature void someName(T t). Other languages provide similar ways to work with functions as first class citizens - even Java now in version 8. I trust you find a way to map this detail of my translation to your favorite programming language. I know it works for Java, C++, Ruby, JavaScript, Python, Go. And if you´re using a Functional Programming language it´s of course a no-brainer. [4] Taken from his blog post “The Craftsman 62, The Dark Path”.

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
    This is an extension to Part 3 in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer In Part 3 I said “there isn't actually a .NET requirement here”, and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC Website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5. The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; request gets relayed to the on-premise service. In Node.js the authentication step looks like this: var options = { host: acs.namespace() + '-sb.accesscontrol.windows.net', path: '/WRAPv0.9/', method: 'POST' }; var values = { wrap_name: acs.issuerName(), wrap_password: acs.issuerSecret(), wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/' }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); res.on('data', function (d) { var token = qs.parse(d.toString('utf8')); callback(token.wrap_access_token); }); }); req.write(qs.stringify(values)); req.end(); Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call: token = 'WRAP access_token=\"' + swt + '\"'; //... var reqHeaders = { Authorization: token }; var options = { host: acs.namespace() + '.servicebus.windows.net', path: '/rest/reverse?string=' + requestUrl.query.string, headers: reqHeaders }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); response.writeHead(res.statusCode, res.headers); res.on('data', function (d) { var reversed = d.toString('utf8') console.log('svc returned: ' + d.toString('utf8')); response.end(reversed); }); }); req.end(); Running the sample Usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. Build and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay which it forwards:

    Read the article

  • Gnome Shell Theme Problem on Ubuntu 11.10

    - by Khurram Majeed
I am trying to install the ANewStart GNOME shell theme on Ubuntu 11.10. I have installed the gnome-shell user-theme extension: sudo add-apt-repository ppa:webupd8team/gnome3 sudo apt-get update sudo apt-get install gnome-shell-extensions-user-theme I got the instructions from here: ANewStart GNOME Shell Theme + AwOken Icons Theme = Pure Art. But when I go to "Advanced Settings - Shell Extensions" it's empty... there is nothing. Also there is an orange triangle sign next to the Shell Theme drop-down in "Advanced Settings - Theme". When I try to run gnome-tweak-tool from the terminal I get the following error: imresh@imresh-laptop:~$ gnome-tweak-tool CRITICAL: Error parsing schema org.gnome.shell (/usr/share/glib-2.0/schemas/org.gnome.shell.gschema.xml) Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/gtweak/gsettings.py", line 45, in __init__ summary = key.getElementsByTagName("summary")[0].childNodes[0].data IndexError: list index out of range WARNING : Error detecting shell Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell_extensions.py", line 145, in __init__ shell = GnomeShellFactory().get_shell() File "/usr/lib/python2.7/dist-packages/gtweak/utils.py", line 38, in getinstance instances[cls] = cls() File "/usr/lib/python2.7/dist-packages/gtweak/gshellwrapper.py", line 123, in __init__ v = map(int,proxy.version.split(".")) File "/usr/lib/python2.7/dist-packages/gtweak/gshellwrapper.py", line 46, in version return json.loads(self.execute_js('const Config = imports.misc.config; Config.PACKAGE_VERSION')) File "/usr/lib/python2.7/dist-packages/gtweak/gshellwrapper.py", line 39, in execute_js result, output = self.proxy.Eval('(s)', js) File "/usr/lib/python2.7/dist-packages/gi/overrides/Gio.py", line 148, in __call__ kwargs.get('flags', 0), kwargs.get('timeout', -1), None) File "/usr/lib/python2.7/dist-packages/gi/types.py", line 43, in function return info.invoke(*args, **kwargs) GError: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.Shell was not provided by any .service files WARNING : Shell not running [a second, nearly identical traceback follows, entered from tweaks/tweak_shell.py line 57 and ending in the same GDBus ServiceUnknown error] WARNING : Could not list shell extensions Traceback (most recent call last): File "/usr/lib/python2.7/dist-packages/gtweak/tweaks/tweak_shell.py", line 62, in __init__ extensions = self._shell.list_extensions() AttributeError: ShellThemeTweak instance has no attribute '_shell' (gnome-tweak-tool:5323): Gtk-CRITICAL **: gtk_widget_get_preferred_height_for_width: assertion `width >= 0' failed [this Gtk-CRITICAL line repeats 15 times] Please help me fix this. I have also restarted the computer many times; it does not make a difference.
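The GError in the traceback - "The name org.gnome.Shell was not provided by any .service files" - means the tweak tool cannot find a running GNOME Shell on the session bus, which also explains the empty extensions page. A hedged sketch of the usual remedy on 11.10, assuming the stock Ubuntu packages:

```sh
# GNOME Shell must actually be running for gnome-tweak-tool's theme and
# extension pages to work; the default Unity session is not enough.
sudo apt-get install gnome-shell
# Log out, pick the "GNOME" session at the login screen, then retry:
gnome-tweak-tool
```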

    Read the article

  • DirectX particle system. ConstantBuffer

    - by Liuka
I'm new to DirectX and I'm making a 2D game. I want to use a particle system to simulate a 3D starfield, so each star has to set its own constant buffer for the vertex shader, e.g. to set its world matrix. So if I have 500 stars (that move every frame) I need to call VSSetConstantBuffers 500 times, and map/unmap each buffer. With 500 stars I have an average of 220 fps, and that's quite good. My bottleneck is VS/PSSetConstantBuffers. If I don't call these functions I have 400 fps (obviously nothing is displayed, since I don't set the positions of the stars). So is there a method to speed up the rendering of the particle system? P.S. I'm using Intel integrated graphics (HD 2000/3000). With an NVIDIA (or AMD) GPU will I have the same bottleneck? If, for example, I don't call SetShaderResources I get 10-20 fps more (for 500 objects) - that is not 180. Why does SetConstantBuffers take so long? LPVOID VSdataPtr = VSmappedResource.pData; memcpy(VSdataPtr, VSdata, CszVSdata); context->Unmap(VertexBuffer, 0); result = context->Map(PixelBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &PSmappedResource); if (FAILED(result)) { outputResult.OutputErrorMessage(TITLE, L"Cannot map the PixelBuffer", &result, OUTPUT_ERROR_FILE); return; } LPVOID PSdataPtr = PSmappedResource.pData; memcpy(PSdataPtr, PSdata, CszPSdata); context->Unmap(PixelBuffer, 0); context->VSSetConstantBuffers(0, 1, &VertexBuffer); context->PSSetConstantBuffers(0, 1, &PixelBuffer); This updates and sets the buffers. It's part of the render method of a sprite class that also contains the vertex buffer and the texture to apply to the quads (it's a 2D game). I have an array of 500 stars (sprites set up with a star texture). Every frame: clear the back buffer; draw the array of stars; present the back buffer. Draw also calls the function update(), which calculates the position of each sprite on screen based on a "camera" class. OK, creating a vertex buffer with the vertices of each quad (star) seems good, since the stars don't change their "virtual" position. So... in a particle system (where particles move) it's better to have all the objects in one vertex array, rather than an array of different sprites/objects, in order to update all the vertices' positions with a single buffer call. In this case I have to use a dynamic vertex buffer with the vertex positions, like this: verticesForQuad={{ XMFLOAT3((float)halfDimensions.x-1+pos.x, (float)halfDimensions.y-1+pos.y, 1.0f), XMFLOAT2(1.0f, 0.0f) }, { XMFLOAT3((float)halfDimensions.x-1+pos.x, -(float)halfDimensions.y-1+pos.y, 1.0f), XMFLOAT2(1.0f, 1.0f) }, { XMFLOAT3(-(float)halfDimensions.x-1+pos.x, (float)halfDimensions.y-1+pos.y, 1.0f), XMFLOAT2(0.0f, 0.0f) }, { XMFLOAT3(-(float)halfDimensions.x-1+pos.x, -(float)halfDimensions.y-1+pos.y, 1.0f), XMFLOAT2(0.0f, 1.0f) }, ...other quads} where halfDimensions is half the size in pixels of a texture and pos is the virtual position of a star. Then I create an array of verticesForQuad and create the vertex buffer: ZeroMemory(&vertexDesc, sizeof(vertexDesc)); vertexDesc.Usage = D3D11_USAGE_DEFAULT; vertexDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER; vertexDesc.ByteWidth = sizeof(VertexType)* 4*numStars; ZeroMemory(&resourceData, sizeof(resourceData)); resourceData.pSysMem = verticesForQuad; result = device->CreateBuffer(&vertexDesc, &resourceData, &CvertexBuffer); and call each frame: Context->IASetVertexBuffers(0, 1, &CvertexBuffer, &stride, &offset); But if I want to add and remove objects I have to recreate the buffer each time, haven't I? Is there a faster way? I think I can create a vertex buffer with a max size (e.g. 10000 objects) and, when I update it, set only the positions actually in use (say 250, for 250 objects) and pass that number as the vertexCount to the draw function (numObjs*4). Or am I wrong?
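A hedged sketch of that last idea in Direct3D 11: one oversized dynamic vertex buffer, rewritten once per frame and drawn with a vertex count matching the live stars. Names like VertexType, NUM_MAX_STARS and numLiveStars are placeholders, and the topology/index setup is assumed to match the existing quad rendering:

```cpp
// Create once: a dynamic buffer sized for the worst case, no initial data.
D3D11_BUFFER_DESC desc = {};
desc.Usage          = D3D11_USAGE_DYNAMIC;        // CPU-writable every frame
desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
desc.ByteWidth      = sizeof(VertexType) * 4 * NUM_MAX_STARS;
device->CreateBuffer(&desc, nullptr, &vertexBuffer);

// Each frame: one Map/Unmap for all stars instead of one update per star.
D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
memcpy(mapped.pData, verticesForQuad, sizeof(VertexType) * 4 * numLiveStars);
context->Unmap(vertexBuffer, 0);

context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
context->Draw(4 * numLiveStars, 0);   // draw only the vertices in use
```

So yes: adding or removing stars only changes numLiveStars; the buffer never needs recreating until NUM_MAX_STARS is exceeded, and the per-star constant-buffer updates disappear entirely.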

    Read the article

  • I need to modify a program to use arrays and a method call. Should I modify the running file, the data collection file, or both?

    - by g3n3rallyl0st
    I have to have multiple classes for this program. The problem is, I don't fully understand arrays and how they work, so I'm a little lost. I will post my program I have written thus far so you can see what I'm working with, but I don't expect anyone to DO my assignment for me. I just need to know where to start and I'll try to go from there. I think I need to use a double array since I will be working with decimals since it deals with money, and my method call needs to calculate total price for all items entered by the user. Please help: RUNNING FILE package inventory2; import java.util.Scanner; public class RunApp { public static void main(String[] args) { Scanner input = new Scanner( System.in ); DataCollection theProduct = new DataCollection(); String Name = ""; double pNumber = 0.0; double Units = 0.0; double Price = 0.0; while(true) { System.out.print("Enter Product Name: "); Name = input.next(); theProduct.setName(Name); if (Name.equalsIgnoreCase("stop")) { return; } System.out.print("Enter Product Number: "); pNumber = input.nextDouble(); theProduct.setpNumber(pNumber); System.out.print("Enter How Many Units in Stock: "); Units = input.nextDouble(); theProduct.setUnits(Units); System.out.print("Enter Price Per Unit: "); Price = input.nextDouble(); theProduct.setPrice(Price); System.out.print("\n Product Name: " + theProduct.getName()); System.out.print("\n Product Number: " + theProduct.getpNumber()); System.out.print("\n Amount of Units in Stock: " + theProduct.getUnits()); System.out.print("\n Price per Unit: " + theProduct.getPrice() + "\n\n"); System.out.printf("\n Total cost for %s in stock: $%.2f\n\n\n", theProduct.getName(), theProduct.calculatePrice()); } } } DATA COLLECTION FILE package inventory2; public class DataCollection { String productName; double productNumber, unitsInStock, unitPrice, totalPrice; public DataCollection() { productName = ""; productNumber = 0.0; unitsInStock = 0.0; unitPrice = 0.0; } //setter methods public void setName(String name) { productName = name; } public void setpNumber(double pNumber) { productNumber = pNumber; } public void setUnits(double units) { unitsInStock = units; } public void setPrice(double price) { unitPrice = price; } //getter methods public String getName() { return productName; } public double getpNumber() { return productNumber; } public double getUnits() { return unitsInStock; } public double getPrice() { return unitPrice; } public double calculatePrice() { return (unitsInStock * unitPrice); } }
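Since the question is where the array work belongs: a hedged sketch that leaves the data collection file unchanged and puts an array plus a totaling method in the running file. The array size and names are illustrative, not part of the assignment:

```java
// In RunApp: keep each product instead of reusing one object.
DataCollection[] products = new DataCollection[100];
int count = 0;

// Inside the while loop, after the four inputs are read:
//     products[count++] = theProduct;
//     theProduct = new DataCollection();  // fresh object for the next pass

// The method call that totals every entered item:
public static double totalPrice(DataCollection[] products, int count) {
    double total = 0.0;
    for (int i = 0; i < count; i++) {
        total += products[i].calculatePrice();
    }
    return total;
}
```

In other words: the DataCollection class can stay as it is; the running file is where the array lives and where the method gets called, e.g. System.out.printf("Total: $%.2f%n", totalPrice(products, count)).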

    Read the article

  • AppEngine JRuby - OutOfMemoryError: Java heap space - can it be solved?

    - by elado
    I use AppEngine JRuby on Rails (SDK version 1.3.3.1) - a problem I encounter often is that after a few requests the server is getting really SLOW, until it dies and throws OutOfMemoryError on the terminal (OSX). The requests themselves are very lightweight, not more than looking for an entity or saving it, using DataMapper. On appspot, this problem is not happening. Is there any way to enlarge the heap space for JRuby? The exception log: Exception in thread "Timer-2" java.lang.OutOfMemoryError: Java heap space Apr 29, 2010 8:08:22 AM com.google.apphosting.utils.jetty.JettyLogger warn WARNING: Error for /users/close_users java.lang.OutOfMemoryError: Java heap space at org.jruby.RubyHash.internalPut(RubyHash.java:480) at org.jruby.RubyHash.internalPut(RubyHash.java:461) at org.jruby.RubyHash.fastASet(RubyHash.java:837) at org.jruby.RubyArray.makeHash(RubyArray.java:2998) at org.jruby.RubyArray.makeHash(RubyArray.java:2992) at org.jruby.RubyArray.op_diff(RubyArray.java:3103) at org.jruby.RubyArray$i_method_1_0$RUBYINVOKER$op_diff.call(org/jruby/RubyArray$i_method_1_0$RUBYINVOKER$op_diff.gen) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:146) at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57) at org.jruby.ast.LocalAsgnNode.interpret(LocalAsgnNode.java:123) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.runtime.InterpretedBlock.evalBlockBody(InterpretedBlock.java:373) at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:346) at org.jruby.runtime.InterpretedBlock.yield(InterpretedBlock.java:303) at org.jruby.runtime.Block.yield(Block.java:194) at org.jruby.RubyArray.collect(RubyArray.java:2354) at org.jruby.RubyArray$i_method_0_0$RUBYFRAMEDINVOKER$collect.call(org/jruby/RubyArray$i_method_0_0$RUBYFRAMEDINVOKER$collect.gen) at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:115) at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:122) at org.jruby.ast.CallNoArgBlockNode.interpret(CallNoArgBlockNode.java:64) at org.jruby.ast.CallNoArgNode.interpret(CallNoArgNode.java:61) at org.jruby.ast.LocalAsgnNode.interpret(LocalAsgnNode.java:123) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.ast.EnsureNode.interpret(EnsureNode.java:98) at org.jruby.ast.BeginNode.interpret(BeginNode.java:83) at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104) at org.jruby.ast.BlockNode.interpret(BlockNode.java:71) at org.jruby.ast.EnsureNode.interpret(EnsureNode.java:96) at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:201) at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:183)
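On the local heap question: the dev server is just a JVM process, so its heap is a JVM setting. A hedged sketch, assuming the App Engine Java SDK's dev_appserver.sh and its --jvm_flag option (worth confirming against your SDK version's --help output):

```sh
# Each --jvm_flag passes one option through to the JVM running the dev server.
dev_appserver.sh --jvm_flag=-Xms256m --jvm_flag=-Xmx1024m /path/to/your/app
```

That treats the symptom; if memory still climbs request by request, something is being retained per request and is worth profiling - which would also help explain why appspot behaves differently.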

    Read the article

  • application: didFinishLaunchingWithOptions doesn't execute, but RootViewController: viewDidLoad does

    - by BeachRunnerJoe
I'm playing around with the iPad SplitView template and it was working fine before I started swapping out view objects in my RootViewController. When it was working, the application:didFinishLaunchingWithOptions method would be called and would set up my persistent store objects, then the RootViewController:viewDidLoad method would be called to populate my rootView with data from my store. I opened up IB and started swapping out view objects in my RootView, and now the application:didFinishLaunchingWithOptions method never gets called, but the RootViewController:viewDidLoad method still does. Obviously, the app crashes because the viewDidLoad method depends on the successful execution of the didFinishLaunchingWithOptions method to set up the persistent store objects. Does anyone have any thoughts on what is causing this, or how I can go about investigating what's causing this? I'm obviously new to iPhone OS development, so I apologize if this question is absurd in any way. Thanks so much in advance for your help!
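Swapping objects in IB is exactly the kind of change that can disconnect the app delegate, so two things are worth checking: that the delegate outlet of File's Owner (UIApplication) in MainWindow.xib still points at your app delegate class, and that the method signature matches the protocol exactly - a mismatched signature is silently never called. A hedged Objective-C sketch of the expected signature, where window and splitViewController stand in for the template's outlets:

```objc
// UIApplicationDelegate - the selector must match exactly.
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
    // Set up the persistent store objects here, before any
    // view controller needs them.
    [window addSubview:splitViewController.view];
    [window makeKeyAndVisible];
    return YES;
}
```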

    Read the article

  • Vim script to compile TeX source and launch PDF only if no errors

    - by Jeet
Hi, I am switching to Vim for my LaTeX editing environment. I would like to be able to TeX the source file from within Vim and launch an external viewer if the compile was successful. I know about the Vim-Latex suite but, if possible, would prefer to avoid using it: it is pretty heavyweight, hijacks a lot of my keys, and clutters up my vimruntime with a lot of files. Here is what I have now: if exists('b:tex_build_mapped') finish endif " use maparg or mapcheck to see if key is free command! -buffer -nargs=* BuildTex call BuildTex(0, <f-args>) command! -buffer -nargs=* BuildAndViewTex call BuildTex(1, <f-args>) noremap <buffer> <silent> <F9> <Esc>:call BuildTex(0)<CR> noremap <buffer> <silent> <S-F9> <Esc>:call BuildTex(1)<CR> let b:tex_build_mapped = 1 if exists('g:tex_build_loaded') finish endif let g:tex_build_loaded = 1 function! BuildTex(view_results, ...) write if filereadable("Makefile") " If Makefile is available in current working directory, run 'make' with arguments echo "(using Makefile)" let l:cmd = "!make ".join(a:000, ' ') echo l:cmd execute l:cmd if a:view_results && v:shell_error == 0 call ViewTexResults() endif else let b:tex_flavor = 'pdflatex' compiler tex make % if a:view_results && v:shell_error == 0 call ViewTexResults() endif endif endfunction function! ViewTexResults(...) if a:0 == 0 let l:target = expand("%:p:r") . ".pdf" else let l:target = a:1 endif if has('mac') execute "! open -a Preview ".l:target endif endfunction The problem is that v:shell_error is not set, even if there are compile errors. Any suggestions or insight on how to detect whether a compile was successful or not would be greatly appreciated! Thanks!
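One hedged alternative: after :make runs with compiler tex, any parsed errors land in the quickfix list, so success can be tested there instead of via v:shell_error. A sketch:

```vim
" After ':make %', inspect the quickfix list instead of v:shell_error.
" Entries with valid == 1 matched 'errorformat', i.e. are real errors.
function! s:TexCompileSucceeded()
  return empty(filter(getqflist(), 'v:val.valid'))
endfunction

" Usage inside BuildTex, replacing the v:shell_error test:
"   if a:view_results && s:TexCompileSucceeded()
"     call ViewTexResults()
"   endif
```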

    Read the article

  • Understanding the Silverlight Dispatcher

    - by Matt
I had an Invalid Cross Thread Access issue but, with a little research, I managed to fix it by using the Dispatcher. Now in my app I have objects with lazy loading. I make an async call using WCF and, as usual, I use the Dispatcher to update my object's DataContext; however, that didn't work for this scenario. I did, however, find a solution here. Here's what I don't understand. In my UserControl I have code to call a Toggle method on my object. The call to this method is within a Dispatcher, like so: Dispatcher.BeginInvoke( () => _CurrentPin.ToggleInfoPanel() ); As I mentioned before, this was not enough to satisfy Silverlight. I had to make another Dispatcher call within my object. My object is NOT a UIElement, but a simple class that handles all its own loading/saving. So the problem was fixed by calling Deployment.Current.Dispatcher.BeginInvoke( () => dataContext.Detail = detail ); within my class. Why did I have to call the Dispatcher twice to achieve this? Shouldn't a high-level call be enough? Is there a difference between Deployment.Current.Dispatcher and the Dispatcher in a UIElement?
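For what it's worth, both properties target the same single UI thread; the practical difference is reachability - Deployment.Current.Dispatcher is available from plain classes, while the Dispatcher property only exists on DependencyObject/UIElement types. A hedged helper sketch usable from either place:

```csharp
using System;
using System.Windows;

public static class UiThread
{
    // Runs the action on the UI thread regardless of the calling thread.
    public static void Run(Action action)
    {
        var dispatcher = Deployment.Current.Dispatcher;
        if (dispatcher.CheckAccess())
            action();                      // already on the UI thread
        else
            dispatcher.BeginInvoke(action);
    }
}
```

A likely reason for the "double dispatch" in this scenario: the WCF completion for the lazy load fired later, on a background thread, after the outer BeginInvoke had already finished - so the property update inside the class needed its own marshaling.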

    Read the article

  • Java ThreadPoolExecutor getting stuck while using ArrayBlockingQueue

    - by Ravi Rao
Hi, I'm working on an application and using a ThreadPoolExecutor to handle various tasks. The ThreadPoolExecutor is getting stuck after some time. To reproduce this in a simpler environment, I've written some simple code that exhibits the issue. import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.RejectedExecutionHandler; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; public class MyThreadPoolExecutor { private int poolSize = 10; private int maxPoolSize = 50; private long keepAliveTime = 10; private ThreadPoolExecutor threadPool = null; private final ArrayBlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>( 100000); public MyThreadPoolExecutor() { threadPool = new ThreadPoolExecutor(poolSize, maxPoolSize, keepAliveTime, TimeUnit.SECONDS, queue); threadPool.setRejectedExecutionHandler(new RejectedExecutionHandler() { @Override public void rejectedExecution(Runnable runnable, ThreadPoolExecutor threadPoolExecutor) { System.out .println("Execution rejected. Please try restarting the application."); } }); } public void runTask(Runnable task) { threadPool.execute(task); } public void shutDown() { threadPool.shutdownNow(); } public ThreadPoolExecutor getThreadPool() { return threadPool; } public void setThreadPool(ThreadPoolExecutor threadPool) { this.threadPool = threadPool; } public static void main(String[] args) { MyThreadPoolExecutor mtpe = new MyThreadPoolExecutor(); for (int i = 0; i < 1000; i++) { final int j = i; mtpe.runTask(new Runnable() { @Override public void run() { System.out.println(j); } }); } } } Try executing this code a few times. It normally prints out the numbers on the console and, when all threads end, it exits. But at times it finishes all tasks and then does not terminate.
The thread dump is as follows: MyThreadPoolExecutor [Java Application] MyThreadPoolExecutor at localhost:2619 (Suspended) Daemon System Thread [Attach Listener] (Suspended) Daemon System Thread [Signal Dispatcher] (Suspended) Daemon System Thread [Finalizer] (Suspended) Object.wait(long) line: not available [native method] ReferenceQueue<T>.remove(long) line: not available ReferenceQueue<T>.remove() line: not available Finalizer$FinalizerThread.run() line: not available Daemon System Thread [Reference Handler] (Suspended) Object.wait(long) line: not available [native method] Reference$Lock(Object).wait() line: 485 Reference$ReferenceHandler.run() line: not available Thread [pool-1-thread-1] (Suspended) Unsafe.park(boolean, long) line: not available [native method] LockSupport.park(Object) line: not available AbstractQueuedSynchronizer$ConditionObject.await() line: not available ArrayBlockingQueue<E>.take() line: not available ThreadPoolExecutor.getTask() line: not available ThreadPoolExecutor$Worker.run() line: not available Thread.run() line: not available [pool-1-thread-2 through pool-1-thread-10 show this identical stack, all parked in ArrayBlockingQueue<E>.take()] Thread [DestroyJavaVM] (Suspended) C:\Program Files\Java\jre1.6.0_07\bin\javaw.exe (Jun 17, 2010 10:42:33 AM) In my actual application, the ThreadPoolExecutor threads go into this state and then it stops responding. Regards, Ravi Rao
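The dump itself shows why the JVM stays alive: the pool workers are non-daemon threads parked in ArrayBlockingQueue.take(), waiting for work that never comes, and core threads never time out by default. A hedged sketch of the usual fix, using the accessors from the code above - shut the pool down once everything has been submitted:

```java
// At the end of main(), after the submit loop:
mtpe.getThreadPool().shutdown();   // no new tasks; queued ones still run
try {
    mtpe.getThreadPool().awaitTermination(1, TimeUnit.MINUTES);
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
```

With the orderly shutdown in place the workers exit after draining the queue, and the process terminates deterministically instead of depending on timing.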

    Read the article

  • MIPS return address in main

    - by Alexander
    I am confused why in the code below I need to decrement the stack pointer and store the return address again. If I don't do that... then PCSpim keeps on looping.. Why is that? ######################################################################################################################## ### main ######################################################################################################################## .text .globl main main: addi $sp, $sp, -4 # Make space on stack sw $ra, 0($sp) # Save return address # Start test 1 ############################################################ la $a0, asize1 # 1st parameter: address of asize1[0] la $a1, frame1 # 2nd parameter: address of frame1[0] la $a2, window1 # 3rd parameter: address of window1[0] jal vbsme # call function # Printing $v0 add $a0, $v0, $zero # Load $v0 for printing li $v0, 1 # Load the system call numbers syscall # Print newline. la $a0, newline # Load value for printing li $v0, 4 # Load the system call numbers syscall # Printing $v1 add $a0, $v1, $zero # Load $v1 for printing li $v0, 1 # Load the system call numbers syscall # Print newline. la $a0, newline # Load value for printing li $v0, 4 # Load the system call numbers syscall # Print newline. la $a0, newline # Load value for printing li $v0, 4 # Load the system call numbers syscall ############################################################ # End of test 1 lw $ra, 0($sp) # Restore return address addi $sp, $sp, 4 # Restore stack pointer jr $ra # Return ######################################################################################################################## ### vbsme ######################################################################################################################## #.text .globl vbsme vbsme: addi $sp, $sp, -4 # create space on the stack pointer sw $ra, 0($sp) # save return address exit: add $v1, $t5, $zero # (v1) x coordinate of the block in the frame with the minimum SAD add $v0, $t4, $zero # (v0) y coordinate of the block in the frame with the minimum SAD lw $ra, 0($sp) # restore return address addi $sp, $sp, 4 # restore stack pointer jr $ra # return If I delete: addi $sp, $sp, -4 # create space on the stack pointer sw $ra, 0($sp) # save return address and lw $ra, 0($sp) # restore return address addi $sp, $sp, 4 # restore stack pointer on vbsme: PCSpim keeps on running... Why??? I shouldn't have to increment/decrement the stack pointer on vbsme and then do the jr again right? The jal in main is supposed to handle that
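To isolate the control flow in question: main is itself a function, called by the startup code, and jal overwrites $ra - so any function that executes a jal (here, main calling vbsme) must save and restore its own return address. vbsme contains no jal, so its save/restore of $ra is actually optional. A hedged skeleton:

```mips
main:
    addi $sp, $sp, -4
    sw   $ra, 0($sp)      # save main's return address (into startup code)
    jal  vbsme            # jal puts the address of the next line into $ra
    lw   $ra, 0($sp)
    addi $sp, $sp, 4
    jr   $ra              # back to the startup code

vbsme:                    # leaf function: no jal inside,
    jr   $ra              # so saving $ra here is not strictly needed

# Without main's save/restore, the final 'jr $ra' jumps to the line
# after 'jal vbsme' instead of leaving main - which loops forever,
# matching the PCSpim behavior described.
```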

    Read the article

• LazyInitializationException when adding to a list that is held within an entity class using Hibernate

    - by molleman
Right, so I am working with Hibernate, Gilead and GWT to persist my data on users and files for a website. My users have a list of file locations. I am using annotations to map my classes to the database. I am getting an org.hibernate.LazyInitializationException when I try to add file locations to the list that is held in the user class. Below is a method that is overridden from an external file upload servlet class I am using; when the file uploads, it calls this method. user1 is loaded from the database elsewhere. The exception occurs at user1.getFileLocations().add(fileLocation); - I don't really understand it at all. Any help would be great. public String executeAction(HttpServletRequest request, List<FileItem> sessionFiles) throws UploadActionException { for (FileItem item : sessionFiles) { if (false == item.isFormField()) { try { YFUser user1 = (YFUser)getSession().getAttribute(SESSION_USER); // This is the location where a file will be stored String fileLocationString = "/Users/Stefano/Desktop/UploadedFiles/" + user1.getUsername(); File fl = new File(fileLocationString); fl.mkdir(); // so here I will create a file container for my uploaded file File file = File.createTempFile("upload-", ".bin",fl); // this is where the file is written to disk item.write(file); // the FileLocation object is then created FileLocation fileLocation = new FileLocation(); fileLocation.setLocation(fileLocationString); //test System.out.println("file path = "+file.getPath()); user1.getFileLocations().add(fileLocation); //the line above is where the exception occurs } catch (Exception e) { throw new UploadActionException(e.getMessage()); } } removeSessionFileItems(request); } return null; } //This is the class file for a Your Files User @Entity @Table(name = "yf_user_table") public class YFUser implements Serializable,ILightEntity { @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "user_id",nullable = false) private int userId; @Column(name = "username") private String username; @Column(name = "password") private String password; @Column(name = "email") private String email; @ManyToMany(cascade = CascadeType.ALL) @JoinTable(name = "USER_FILELOCATION", joinColumns = { @JoinColumn(name = "user_id") }, inverseJoinColumns = { @JoinColumn(name = "locationId") }) private List<FileLocation> fileLocations = new ArrayList<FileLocation>() ; public YFUser(){ } public int getUserId() { return userId; } private void setUserId(int userId) { this.userId = userId; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } public String getPassword() { return password; } public void setPassword(String password) { this.password = password; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public List<FileLocation> getFileLocations() { if(fileLocations ==null){ fileLocations = new ArrayList<FileLocation>(); } return fileLocations; } public void setFileLocations(List<FileLocation> fileLocations) { this.fileLocations = fileLocations; } /* public void addFileLocation(FileLocation location){ fileLocations.add(location); }*/ @Override public void addProxyInformation(String property, Object proxyInfo) { // TODO Auto-generated method stub } @Override public String getDebugString() { // TODO Auto-generated method stub return null; } @Override public Object getProxyInformation(String property) { // TODO Auto-generated method stub return null; }
@Override public boolean isInitialized(String property) { // TODO Auto-generated method stub return false; } @Override public void removeProxyInformation(String property) { // TODO Auto-generated method stub } @Override public void setInitialized(String property, boolean initialised) { // TODO Auto-generated method stub } @Override public Object getValue() { // TODO Auto-generated method stub return null; } } @Entity @Table(name = "fileLocationTable") public class FileLocation implements Serializable { @Id @GeneratedValue(strategy = GenerationType.AUTO) @Column(name = "locationId", updatable = false, nullable = false) private int ieId; @Column (name = "location") private String location; public FileLocation(){ } public int getIeId() { return ieId; } private void setIeId(int ieId) { this.ieId = ieId; } public String getLocation() { return location; } public void setLocation(String location) { this.location = location; } } Apr 2, 2010 11:33:12 PM org.hibernate.LazyInitializationException <init> SEVERE: failed to lazily initialize a collection of role: com.example.client.YFUser.fileLocations, no session or session was closed org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.example.client.YFUser.fileLocations, no session or session was closed at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationException(AbstractPersistentCollection.java:380) at org.hibernate.collection.AbstractPersistentCollection.throwLazyInitializationExceptionIfNotConnected(AbstractPersistentCollection.java:372) at org.hibernate.collection.AbstractPersistentCollection.initialize(AbstractPersistentCollection.java:365) at org.hibernate.collection.AbstractPersistentCollection.write(AbstractPersistentCollection.java:205) at org.hibernate.collection.PersistentBag.add(PersistentBag.java:297) at com.example.server.TestServiceImpl.saveFileLocation(TestServiceImpl.java:132) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at net.sf.gilead.gwt.PersistentRemoteService.processCall(PersistentRemoteService.java:174) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:224) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:713) at javax.servlet.http.HttpServlet.service(HttpServlet.java:806) at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487) at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362) at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216) at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181) at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:729) at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.handler.RequestLogHandler.handle(RequestLogHandler.java:49) at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152) at org.mortbay.jetty.Server.handle(Server.java:324) at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:505) at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:843) at 
org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:647) at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:211) at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:380) at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:396) at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:488) Apr 2, 2010 11:33:12 PM net.sf.gilead.core.PersistentBeanManager clonePojo INFO: Third party instance, not cloned : org.hibernate.LazyInitializationException: failed to lazily initialize a collection of role: com.example.client.YFUser.fileLocations, no session or session was closed
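The trace says the fileLocations collection is being touched after the Hibernate session that loaded user1 has closed - the upload servlet runs outside the request path that Gilead manages. Two hedged options, sketched against the mapping above (the sessionFactory handle is assumed):

```java
// Option 1: load the collection eagerly with the user.
@ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
@JoinTable(name = "USER_FILELOCATION",
           joinColumns = { @JoinColumn(name = "user_id") },
           inverseJoinColumns = { @JoinColumn(name = "locationId") })
private List<FileLocation> fileLocations = new ArrayList<FileLocation>();

// Option 2: do the add inside an open session in the upload servlet.
Session session = sessionFactory.openSession();
try {
    session.beginTransaction();
    YFUser attached = (YFUser) session.get(YFUser.class, user1.getUserId());
    attached.getFileLocations().add(fileLocation);  // lazy load works here
    session.getTransaction().commit();
} finally {
    session.close();
}
```

Option 1 is the one-line change but loads the locations on every user fetch; option 2 keeps the mapping lazy and only pays the cost during upload.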

    Read the article

  • catching a deadlock in a simple odd-even sending

    - by user562264
I'm trying to solve a simple problem with MPI; my implementation is MPICH2 and my code is in Fortran. I have used blocking send and receive. The idea is simple, but when I run it, it crashes! I have absolutely no idea what is wrong - can anyone comment on this issue, please? Here is a piece of the code: integer,parameter::IM=100,JM=100 REAL,ALLOCATABLE ::T(:,:),TF(:,:) CALL MPI_COMM_RANK(MPI_COMM_WORLD,RNK,IERR) CALL MPI_COMM_SIZE(MPI_COMM_WORLD,SIZ,IERR) prv = rnk-1 nxt = rnk+1 LIM = INT(IM/SIZ) IF (rnk==0) THEN ALLOCATE(TF(IM,JM)) prv = MPI_PROC_NULL ELSEIF(rnk==siz-1) THEN NXT = MPI_PROC_NULL LIM = LIM+MOD(IM,SIZ) END IF IF (MOD(RNK,2)==0) THEN CALL MPI_SEND(T(2,:),JM+2,MPI_REAL,PRV,10,MPI_COMM_WORLD,IERR) CALL MPI_RECV(T(1,:),JM+2,MPI_REAL,PRV,20,MPI_COMM_WORLD,STAT,IERR) ELSE CALL MPI_RECV(T(LIM+2,:),JM+2,MPI_REAL,NXT,10,MPI_COMM_WORLD,STAT,IERR) CALL MPI_SEND(T(LIM+1,:),JM+2,MPI_REAL,NXT,20,MPI_COMM_WORLD,IERR) END IF As I understand it, the even processes are not receiving anything, while the odd ones finish sending successfully. In some cases, when I added some prints to observe what is going on, I saw that the variable NXT was changing during the sending procedure! For example, all the odd processes were sending messages to process 0, not to their next one!
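Before the even/odd logic, two things stand out: T is never allocated (only TF is, and only on rank 0), so T(2,:) touches an unallocated array, and the count JM+2 must match T's actual row length - out-of-bounds transfers corrupting memory would also explain NXT appearing to change mid-run. A hedged Fortran sketch (the halo layout is an assumption) that also sidesteps the parity ordering by using MPI_SENDRECV, which cannot deadlock on matched pairs:

```fortran
! Allocate T on every rank: LIM interior rows plus two halo rows,
! with JM+2 columns so that the count JM+2 is actually correct.
ALLOCATE(T(LIM+2, JM+2))

! Exchange with the previous neighbour in one call, no even/odd split:
CALL MPI_SENDRECV(T(2,:), JM+2, MPI_REAL, PRV, 10, &
                  T(1,:), JM+2, MPI_REAL, PRV, 20, &
                  MPI_COMM_WORLD, STAT, IERR)
```

The matching exchange with NXT (tags reversed) replaces the ELSE branch.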

    Read the article

  • Reflection to access advanced telephony features

    - by Tyler
I am trying to use reflection to access some advanced, unpublished features of the telephony API. Currently I am having trouble instantiating a serviceManager object, which is needed to get the "phone" service as a binder, which I can then use to instantiate a telephony object, which in turn is needed to make a call, end a call, etc. Currently, when I make the call serviceManagerObject = tempInterfaceMethod.invoke(null, new Object[] { new Binder() }); it throws a NullPointerException. I believe this has to do with creating a new Binder instead of passing the appropriate binder (and I am unsure which one is appropriate). public void placeReflectedCall() throws ClassNotFoundException, SecurityException, NoSuchMethodException, IllegalArgumentException, IllegalAccessException, InvocationTargetException, InstantiationException { String serviceManagerName = "android.os.IServiceManager"; String serviceManagerNativeName = "android.os.ServiceManagerNative"; String telephonyName = "com.android.internal.telephony.ITelephony"; Class telephonyClass; Class telephonyStubClass; Class serviceManagerClass; Class serviceManagerStubClass; Class serviceManagerNativeClass; Class serviceManagerNativeStubClass; Method telephonyCall; Method telephonyEndCall; Method telephonyAnswerCall; Method getDefault; Method[] temps; Constructor[] serviceManagerConstructor; // Method getService; Object telephonyObject; Object serviceManagerObject; String number = "1111111111"; telephonyClass = Class.forName(telephonyName); telephonyStubClass = telephonyClass.getClasses()[0]; serviceManagerClass = Class.forName(serviceManagerName); serviceManagerNativeClass = Class.forName(serviceManagerNativeName); Method getService = // getDefaults[29]; serviceManagerClass.getMethod("getService", String.class); Method tempInterfaceMethod = serviceManagerNativeClass.getMethod( "asInterface", IBinder.class); // this does not work serviceManagerObject = tempInterfaceMethod.invoke(null, new Object[] { new Binder() }); IBinder retbinder = (IBinder) getService.invoke(serviceManagerObject, "phone"); Method serviceMethod = telephonyStubClass.getMethod("asInterface", IBinder.class); telephonyObject = serviceMethod .invoke(null, new Object[] { retbinder }); telephonyCall = telephonyClass.getMethod("call", String.class); telephonyEndCall = telephonyClass.getMethod("endCall"); telephonyAnswerCall = telephonyClass.getMethod("answerRingingCall"); telephonyCall.invoke(telephonyObject, number); } Thanks in advance for any answers.
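A hedged alternative that avoids instantiating a service manager object at all: the hidden android.os.ServiceManager class has a static getService(String) that returns the "phone" binder directly. These are internal APIs, so names and the ITelephony method signatures vary across Android releases - treat this as a sketch:

```java
// Get the "phone" binder via the hidden ServiceManager facade.
Class<?> serviceManager = Class.forName("android.os.ServiceManager");
Method getService = serviceManager.getMethod("getService", String.class);
IBinder phoneBinder = (IBinder) getService.invoke(null, "phone");

// Turn the binder into an ITelephony proxy and place the call.
Class<?> stub = Class.forName("com.android.internal.telephony.ITelephony$Stub");
Method asInterface = stub.getMethod("asInterface", IBinder.class);
Object telephony = asInterface.invoke(null, phoneBinder);
telephony.getClass().getMethod("call", String.class).invoke(telephony, number);
```

As for the posted code: passing a fresh new Binder() into ServiceManagerNative.asInterface hands it an object that is not the real service manager binder, so the subsequent getService lookup has nothing valid to talk to - which fits the NullPointerException.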

    Read the article

  • Is this good C# style?

    - by burnt1ce
Consider the following method signature: public static bool TryGetPolls(out List<Poll> polls, out string errorMessage) This method does the following: it accesses the database to generate a list of Poll objects; it returns true if it was successful, and errorMessage will be an empty string; it returns false if it was not successful, and errorMessage will contain an exception message. Is this good style? Update: Let's say I do use the following method signature instead: public static List<Poll> GetPolls() and in that method I don't catch any exceptions (so I depend on the caller to catch them). How do I dispose of and close all the objects that are in the scope of that method? As soon as an exception is thrown, the code that closes and disposes objects in the method is no longer reachable.
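On the update specifically: a using block (or try/finally) runs its Dispose calls even while an exception unwinds out of the method, so the cleanup stays reachable without catching anything. A hedged sketch - the SQL, the connection string, and the Poll shape are assumptions:

```csharp
public static List<Poll> GetPolls()
{
    // Dispose runs on both the normal and the exceptional path.
    using (var connection = new SqlConnection(ConnectionString))
    using (var command = new SqlCommand("SELECT Id, Title FROM Polls", connection))
    {
        connection.Open();
        var polls = new List<Poll>();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
                polls.Add(new Poll { Id = reader.GetInt32(0),
                                     Title = reader.GetString(1) });
        }
        return polls;  // an exception here still closes everything above
    }
}
```

This is also why the exception-throwing variant is generally preferred over the bool/out-string pattern: the caller gets the real exception (type, stack trace) instead of a flattened message, and cleanup is handled locally.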

    Read the article

< Previous Page | 256 257 258 259 260 261 262 263 264 265 266 267  | Next Page >