Search Results

Search found 23968 results on 959 pages for 'tail call'.

  • Question about effective logging in C#

    - by MartyIX
    I've written a simple class for debugging and I call the method Debugger.WriteLine(...) in my code like this: Debugger.WriteLine("[Draw]", "InProgress", "[x,y] = " + x.ToString("0.00") + ", " + y.ToString("0.00") + "; pos = " + lastPosX.ToString() + "x" + lastPosY.ToString() + " -> " + posX.ToString() + "x" + posY.ToString() + "; SS = " + squareSize.ToString() + "; MST = " + startTime.ToString("0.000") + "; Time = " + time.ToString() + phase.ToString(".0000") + "; progress = " + progress.ToString("0.000") + "; step = " + step.ToString() + "; TimeMovementEnd = " + UI.MovementEndTime.ToString()); The body of the procedure Debugger.WriteLine is compiled only in Debug mode (#if/#endif directives). What worries me is that I often need ToString() in the Debugger.WriteLine call, which is costly because it keeps creating new strings (for example, every time a number changes). How do I solve this problem? A few points/questions about debugging/tracing: I don't want to wrap every Debugger.WriteLine in an IF statement or use preprocessor directives to leave out the debugging methods, because it would inevitably lead to unreadable code and require too much typing. I don't want to use any framework for tracing/debugging; I want to try to program it myself. Are the Trace methods (http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.aspx) left out when compiling in Release mode? If so, is it possible to make my methods behave similarly? http://msdn.microsoft.com/en-us/library/fht0f5be.aspx shows output = String.Format("You are now {0} years old.", years); which seems nice. Is that a solution to my problem with ToString()?
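
    One way to attack both concerns at once is sketched below (this is not the poster's code; the class is named Debugger2 only to avoid colliding with System.Diagnostics.Debugger). Marking the method with [Conditional("DEBUG")] makes the compiler strip the entire call site - including evaluation of the arguments - from builds where DEBUG is not defined, and taking a format string plus params defers all the ToString()/String.Format() work to the sink:

    ```csharp
    using System;
    using System.Diagnostics;

    public static class Debugger2
    {
        // With [Conditional("DEBUG")] the compiler removes every call to this
        // method, *and* the evaluation of its arguments, from builds that do
        // not define DEBUG -- so no formatting cost is paid in Release.
        [Conditional("DEBUG")]
        public static void WriteLine(string category, string state, string format, params object[] args)
        {
            // The formatting work only happens here, i.e. only in Debug builds.
            Console.WriteLine("{0} {1} {2}", category, state, string.Format(format, args));
        }
    }

    public static class Demo
    {
        public static void Main()
        {
            double x = 1.2345, y = 6.789;
            // No manual ToString() calls at the call site.
            Debugger2.WriteLine("[Draw]", "InProgress", "[x,y] = {0:0.00}, {1:0.00}", x, y);
        }
    }
    ```

    For the Trace question: the System.Diagnostics.Trace methods are conditional on the TRACE symbol (which the default project templates define in both Debug and Release), while Debug.WriteLine is conditional on DEBUG, so the attribute above is the same mechanism those classes use.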

  • comparing strings from two different sources in javascript

    - by andy-score
    This is a bit of a specific request, unfortunately. I have a CMS that allows a user to input text into a TinyMCE editor for various posts they have made. The editor is loaded via ajax to allow multiple posts to be edited from one page. I want to be able to check whether edits were made to the main text if cancel is clicked. Currently I get the value of the text from the database during the ajax call, json_encode it, then store it in a javascript variable during the callback, to be checked against later. When cancel is clicked, the current value of the hidden textarea (used by TinyMCE to store the data for submission) is grabbed using jquery.val() and checked against the stored value from the previous ajax call like this: if(stored_value!=textarea.val()) { return true } It currently always returns true, even if no changes have been made. The issue seems to be that textarea.val() uses HTML entities, whereas the JSON-encoded version from ajax doesn't. The response from ajax in Firebug looks like this: <p>some text<\/p>\r\n<p>some more text<\/p> The textarea source code looks like this: &lt;p&gt;some text&lt;/p&gt; &lt;p&gt;some more text&lt;/p&gt; These are obviously different, but how can I get them to be treated as the same when evaluated? Is there a function that compares the final output of a string, or a way to convert one string to the other using javascript? I tried using html entities in the ajax page, but this returned the string with the HTML entities intact when alerted, I assume because json_encoding it turned them into characters. Any help would be greatly appreciated.

  • VBScript: Disable caching of response from server to HTTP GET URL request

    - by Rob
    I want to turn off the cache used when a URL call to a server is made from VBScript running within an application on a Windows machine. What function/method/object do I use to do this? When the call is made for the first time, my Linux-based Apache server returns a response back from the CGI Perl script that it is running. However, subsequent runs of the script seem to be using the same response as the first time, so the data is being cached somewhere. My server logs confirm that the server is not being called on those subsequent runs, only the first time. This is what I am doing. I am using the following code from within a commercial application (I don't wish to mention which application; it's probably not relevant to my problem): With CreateObject("MSXML2.XMLHTTP") .open "GET", "http://myserver/cgi-bin/nsr/nsr.cgi?aparam=1", False .send nsrresponse = .responseText End With Is there a function/method on the above object to turn off caching, or should I be calling a method/function to turn off caching on a response object before making the URL request? I looked here for a solution: http://msdn.microsoft.com/en-us/library/ms535874(VS.85).aspx - not quite helpful enough. And here: http://www.w3.org/TR/XMLHttpRequest/ - very unfriendly and hard to read. I am also trying to force not using the cache via HTTP header settings and HTML document header meta data. Snippet of the server-side Perl CGI script that returns the response back to the calling client, setting the expiry to 0: print $httpGetCGIRequest->header( -type => 'text/html', -expires => '+0s', ); HTML meta settings in the response sent back to the client: <html><head><meta http-equiv="CACHE-CONTROL" content="NO-CACHE"></head> <body> response message generated from server </body> </html> The above HTTP header and HTML document head settings haven't worked, hence my question.

  • JSESSIONID collision between two servers on same ip but different ports

    - by Steve Armstrong
    I've got a situation where I have two different webapps running on a single server, using different ports. They're both running Java's Jetty servlet container, so they both use a cookie parameter named JSESSIONID to track the session id. These two webapps are fighting over the session id. Open a Firefox tab, and go to WebApp1. WebApp1's HTTP response has a set-cookie header with JSESSIONID=1. Firefox now has a Cookie header with JSESSIONID=1 in all its HTTP requests to WebApp1. Open a second Firefox tab, and go to WebApp2. The HTTP request to WebApp2 also has a Cookie header with JSESSIONID=1, but in the doGet, when I call req.getSession(false); I get null. And if I call req.getSession(true) I get a new Session object, but then the HTTP response from WebApp2 has a set-cookie header with JSESSIONID=20. Now WebApp2 has a working Session, but WebApp1's session is gone. Going to WebApp1 will give me a new session, blowing away WebApp2's session. Continue forever. So the Sessions are thrashing between the two web apps. I'd really like req.getSession(false) to return a valid session if there's already a JSESSIONID cookie defined. One option is to basically reimplement the Session framework with a HashMap and cookies called WEBAPP1SESSIONID and WEBAPP2SESSIONID, but that sucks, and means I'll have to hack the new Session stuff into ActionServlet and a few other places. This must be a problem others have encountered. Is Jetty's HttpServletRequest.getSession(boolean) just crappy?

  • wsimport generate a client with cookies

    - by dierre
    I'm generating a client for a SOAP 1.2 service using wsimport from the jaxws-maven-plugin in Maven with the following execution: <groupId>org.jvnet.jax-ws-commons</groupId> <artifactId>jaxws-maven-plugin</artifactId> <version>2.2</version> <executions> <execution> <goals> <goal>wsimport</goal> </goals> <configuration> <sourceDestDir>${project.basedir}/src/main/java</sourceDestDir> <wsdlUrls> <wsdlUrl>${webservice.url}</wsdlUrl> </wsdlUrls> <extension>true</extension> </configuration> </execution> The first time the client calls the proxy, the load balancer generates a cookie and sends it back. The client should send it back so the load balancer knows which server is dedicated to a specific client (the idea is that on the first call the client gets a server and the cookie identifies that server; the load balancer then sends the client to the same server for every subsequent call). Now, is there a way to tell the plugin to automatically enable cookie handling?

  • dynamically created controls and responding to control events

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine, and .NET takes care of managing ViewState for these controls. protected override void OnLoad(EventArgs e) { base.OnLoad(e); RenderDynamicControls(); } private void RenderDynamicControls() { //1. call service layer to retrieve form definition //2. create and add controls to page container } I have a new requirement in which, if a user clicks on a given button (this button is created at design time), the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code: protected void MyButton_Click(object sender, EventArgs e) { RenderDynamicControlsALittleDifferently(); } private void RenderDynamicControlsALittleDifferently() { //1. clear all controls from the page container added in RenderDynamicControls() //2. call service layer to retrieve form definition //3. create and add controls to page container } My question is, is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child control events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid. On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that would make that transition easier would be much appreciated. Thanks
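
    For what it's worth, rebuilding the dynamic tree from the click handler is the standard answer here; the page cannot know during OnLoad which control event is about to fire. One common refinement, sketched below with illustrative names (FormContainer stands in for whatever PlaceHolder the controls are added to), is to keep the "render mode" in ViewState so that OnLoad rebuilds the correct tree on every later postback and the double build only happens on the request that changes the mode:

    ```csharp
    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public class DynamicFormPage : Page
    {
        // In a real page this would come from the .aspx markup.
        protected PlaceHolder FormContainer = new PlaceHolder();

        private bool AlternateLayout
        {
            get { return (bool)(ViewState["AlternateLayout"] ?? false); }
            set { ViewState["AlternateLayout"] = value; }
        }

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            BuildDynamicControls(AlternateLayout);   // runs on every request/postback
        }

        protected void MyButton_Click(object sender, EventArgs e)
        {
            AlternateLayout = true;                  // persisted in ViewState
            FormContainer.Controls.Clear();
            BuildDynamicControls(AlternateLayout);   // rebuild once for this response
        }

        private void BuildDynamicControls(bool alternate)
        {
            // call the service layer for the form definition and add the
            // controls to FormContainer here
        }
    }
    ```

    The same pattern should carry over to an UpdatePanel: as long as the controls are recreated in OnLoad with the same IDs, partial postbacks keep their ViewState and events.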

  • How to associate static entity instances in a Session without database retrieval

    - by Michael Hedgpeth
    I have a simple Result class that used to be an enum but has evolved into its own class with its own table. public class Result { public static readonly Result Passed = new Result(StatusType.Passed) { Id = [Predefined] }; public static readonly Result NotRun = new Result(StatusType.NotRun) { Id = [Predefined] }; public static readonly Result Running = new Result(StatusType.Running) { Id = [Predefined] }; } Each of these predefined values has a row in the database at its predefined Guid Id. There is then a FailedResult that has an instance per failure: public class FailedResult : Result { public FailedResult(string description) : base(StatusType.Failed) { . . . } } I then have an entity that has a Result: public class Task { public Result Result { get; set; } } When I save a Task, if the Result is a predefined one, I want NHibernate to know that it doesn't need to save it to the database, nor does it need to fetch it from the database; I just want it to save by Id. The way I get around this is that when I am setting up the session, I call a method to load the static entities: protected override void OnSessionOpened(ISession session) { LockStaticResults(session, Result.Passed, Result.NotRun, Result.Running); } private static void LockStaticResults(ISession session, params Result[] results) { foreach (var result in results) { session.Load(result, result.Id); } } The problem with the session.Load call is that it appears to be fetching from the database (something I don't want it to do). How can I make this not hit the database, but trust that my static (immutable) Result instances are both up to date and part of the session?
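
    One commonly suggested alternative, sketched here (this is not from the original post): reattach the predefined instances with ISession.Lock(obj, LockMode.None), which associates an already-populated detached object with the session without issuing a SELECT, whereas ISession.Load(obj, id) exists specifically to populate the given instance from the database:

    ```csharp
    using NHibernate;

    public static class StaticResultAttacher
    {
        // Reassociate detached, fully-populated instances with the session.
        // LockMode.None means: no version check and no database round trip.
        public static void LockStaticResults(ISession session, params Result[] results)
        {
            foreach (var result in results)
            {
                session.Lock(result, LockMode.None);
            }
        }
    }
    ```

    Whether the predefined rows are then left alone on flush also depends on the mapping (for example marking the Result class mutable="false"), so that part is worth verifying against the actual mapping files.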

  • iPhone Rendering Question

    - by slythic
    Hi all, I'm new to iPhone/Objective-C development. I "jumped the gun" and started reading and implementing some chapters from O'Reilly's iPhone Development. I was following the ebook's code exactly and my code was generating the following error: CGContextSetFillColorWithColor: invalid context CGContextFillRects: invalid context CGContextSetFillColorWithColor: invalid context CGContextGetShouldSmoothFonts: invalid context However, when I downloaded the sample code for the same chapter the code is different. Book Code: - (void) Render { CGContextRef g = UIGraphicsGetCurrentContext(); //fill background with gray CGContextSetFillColorWithColor(g, [UIColor grayColor].CGColor); CGContextFillRect(g, CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)); //draw text in black. CGContextSetFillColorWithColor(g, [UIColor blackColor].CGColor); [@"O'Reilly Rules!" drawAtPoint:CGPointMake(10.0, 20.0) withFont:[UIFont systemFontOfSize:[UIFont systemFontSize]]]; } Actual Project Code from the website (works): - (void) Render { [self setNeedsDisplay]; //this sets up a deferred call to drawRect. } - (void)drawRect:(CGRect)rect { CGContextRef g = UIGraphicsGetCurrentContext(); //fill background with gray CGContextSetFillColorWithColor(g, [UIColor grayColor].CGColor); CGContextFillRect(g, CGRectMake(0, 0, self.frame.size.width, self.frame.size.height)); //draw text in black. CGContextSetFillColorWithColor(g, [UIColor blackColor].CGColor); [@"O'Reilly Rules!" drawAtPoint:CGPointMake(10.0, 20.0) withFont:[UIFont systemFontOfSize:[UIFont systemFontSize]]]; } What is it about these lines of code that make the app render correctly? - (void) Render { [self setNeedsDisplay]; //this sets up a deferred call to drawRect. } - (void)drawRect:(CGRect)rect { Thanks in advance for helping out a newbie!

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several instances of the same ASP.NET application (one per customer) based on our custom framework (libraries). Each application uses its own database (Initial Catalog in terms of the connection string). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y. I have several general questions about how to design the architecture: (1) Can the workflow database be shared by all the applications? (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host? (3) How should the workflow engine communicate with the applications? Should the application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine calls? How would the workflow engine then identify the applications? Both cases probably require some application identifier as a parameter in the API calls? (4) We would also like to store some information in the application databases based on the workflow states. Is that possible? Thanks for suggestions!

  • Setting processor affinity on CSC.exe launched by CoreCompile MSBuild Task

    - by Hardy
    I am wondering if there is a simple way to ensure that when a C# project is compiled, the csc.exe that gets launched inherits the parent's processor affinity settings, or perhaps a way whereby I can supply this. I have been trying to accomplish it by launching a bat file from the VS.NET command prompt like start /affinity 01 custombuild.cmd, and inside my custombuild.cmd I have @echo off msbuild Libraries.sln /t:rebuild /p:Configuration=Release;platform=x64 /m:1 :END The command-line call to Csc.exe this generates looks like the following: C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ... ignoring the rest for brevity. What I'd like to see is csc.exe inheriting the processor affinity, or a simple way to override how the csc.exe call is generated so I can turn it into start /affinity 01 C:\Windows\Microsoft.NET\Framework\v4.0.30319\Csc.exe ... ignoring the rest for brevity. I also noticed that the CoreCompile target is defined in Microsoft.CSharp.targets; should I be considering overriding the MSBuildToolsPath variable so I can sneak in my own version? This feels rather hacky. Any help would be much appreciated.
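
    One thing to experiment with (this is an assumption about the setup, not a verified recipe): on Windows a child process inherits its parent's affinity mask, so a tiny wrapper executable that pins its own affinity and then forwards to the real compiler should constrain csc.exe as well; whether it can be slotted in via properties such as CscToolPath/CscToolExe depends on your targets. Note also that msbuild's node reuse can hand the compile to a pre-existing MSBuild node that never saw the affinity setting, so /nodeReuse:false may be part of the picture. A sketch of such a wrapper:

    ```csharp
    // Hypothetical wrapper (all paths/names are illustrative): pin this process
    // to CPU 0, then run the real compiler; the child csc.exe inherits the
    // parent's affinity mask.
    using System;
    using System.Diagnostics;

    class CscAffinityWrapper
    {
        static int Main(string[] args)
        {
            Process.GetCurrentProcess().ProcessorAffinity = (IntPtr)0x1; // CPU 0 only

            var psi = new ProcessStartInfo
            {
                // Assumed location of the real compiler; adjust for your toolset.
                FileName = @"C:\Windows\Microsoft.NET\Framework\v4.0.30319\csc.exe",
                // Naive argument pass-through; quoting of spaces and response
                // files is not handled here.
                Arguments = string.Join(" ", args),
                UseShellExecute = false
            };

            using (Process compiler = Process.Start(psi))
            {
                compiler.WaitForExit();
                return compiler.ExitCode;
            }
        }
    }
    ```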

  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008, and utilizing the automated build support out of the box. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated since they may contain applications, or contain common libraries or shared components. We have roughly 20 or so applications, three large web sites, and about 20 components. Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, it is reflected in all of the projects that consume that component, and we can make sure nothing was broken. We have build projects for each solution, whether that's an application, component, or web site. For this example, we will call them solutions 01, 02, and 03. These reference multiple projects (both their own core project and test projects, plus the projects relating to various components). Solution 01 has projects A, B, and C. Solution 02 has projects C, D, and E. Solution 03 has projects E, F, and G. Now, for the problem. If I modify project A, the system will end up rebuilding all three solutions. Worse, all thirty solutions reference common projects used for data access (let's call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script. Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified - i.e. in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!

  • Why do pure virtual base classes get direct access to static data members while derived instances don't?

    - by Shamster
    I've created a simple pair of classes. One is pure virtual with a static data member, and the other is derived from the base, as follows: #include <iostream> template <class T> class Base { public: Base (const T _member) { member = _member; } static T member; virtual void Print () const = 0; }; template <class T> T Base<T>::member; template <class T> void Base<T>::Print () const { std::cout << "Base: " << member << std::endl; } template <class T> class Derived : public Base<T> { public: Derived (const T _member) : Base<T>(_member) { } virtual void Print () const { std::cout << "Derived: " << this->member << std::endl; } }; I've found from this relationship that when I need access to the static data member in the base class, I can refer to it directly, as if it were a regular, non-static class member - i.e. the Base::Print() method does not require a this-> qualifier. However, the derived class does require the this->member indirect access syntax. I don't understand why this is. Both class methods are accessing the same static data, so why does the derived class need further specification? A simple call to test it is: int main () { Derived<double> dd (7.0); dd.Print(); return 0; } which prints the expected "Derived: 7"

  • Passing a list of ints to WebMethod using jQuery and ajax.

    - by birdus
    I'm working on a web page (ASP.NET 4.0) and am just starting simple to try and get this ajax call working (I'm an ajax/jQuery neophyte) and I'm getting an error on the call. Here's the js: var TestParams = new Object; TestParams.Items = new Object; TestParams.Items[0] = 1; TestParams.Items[1] = 5; TestParams.Items[2] = 10; var finalObj = JSON.stringify(TestParams); var _url = 'AdvancedSearch.aspx/TestMethod'; $(document).ready(function () { $.ajax({ type: "POST", url: _url, data: finalObj, contentType: "application/json; charset=utf-8", dataType: "json", success: function (msg) { $(".main").html(msg.d); }, error: function (xhr, ajaxOptions, thrownError) { alert(thrownError.toString()); } }); Here's the method in my code behind file: [Serializable] public class TestParams { public List<int> Items { get; set; } } public partial class Search : Page { [WebMethod] public static string TestMethod(TestParams testParams) { // I never hit a breakpoint in here // do some stuff // return some stuff return ""; } } Here's the stringified json I'm sending back: {"Items":{"0":1,"1":5,"2":10}} When I run it, I get this error: Microsoft JScript runtime error: 'undefined' is null or not an object It breaks on the error function. I've also tried this variation on building the json (based on a sample on a website) with this final json: var TestParams = new Object; TestParams.Positions = new Object; TestParams.Positions[0] = 1; TestParams.Positions[1] = 5; TestParams.Positions[2] = 10; var DTO = new Object; DTO.positions = TestParams; var finalObj = JSON.stringify(DTO) {"positions":{"Positions":{"0":1,"1":5,"2":10}}} Same error message. It doesn't seem like it should be hard to send a list of ints from a web page to my webmethod. Any ideas? Thanks, Jay
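
    For reference, ASP.NET page methods bind the POSTed JSON to parameters by name, so for the signature above the handler expects a payload shaped like {"testParams":{"Items":[1,5,10]}} - the outer property must match the parameter name testParams, and Items must be a JSON array, which JSON.stringify only produces from a real JavaScript array ([] or new Array) rather than an object with numeric keys. A sketch of the server side with that expectation spelled out (the string.Join body is purely illustrative):

    ```csharp
    // Sketch of the page-method side; the comment shows the JSON shape that the
    // default parameter binding expects for this signature.
    //
    //   POST body: {"testParams":{"Items":[1,5,10]}}
    //
    using System.Collections.Generic;
    using System.Web.Services;

    public class TestParams
    {
        public List<int> Items { get; set; }
    }

    public partial class Search : System.Web.UI.Page
    {
        [WebMethod]
        public static string TestMethod(TestParams testParams)
        {
            if (testParams == null || testParams.Items == null)
                return "no items";
            return string.Join(",", testParams.Items); // e.g. "1,5,10"
        }
    }
    ```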

  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Windows will look in the current directory for any DLLs before checking the system directory, so it loads mine. I have one DLL that redirects with this method: #pragma comment(linker, "/export:oldFunc=nwshader.newFunc") to send them to a differently named function in my own DLL. I then do any processing and call the original function from the system DLL. I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to make is a shared library (.so file). If this is preloaded before the NWN executable (I found a shell script to handle this), my functions will be called. The only problem is that I need to call the original function (I would use various DLL dynamic loading methods for this, I think) and need to be able to do Detour-like hooking of internal functions. At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, but I don't know exactly what the *nix community refers to it as. I can code C++, but I'm more used to Windows. Being OpenGL, the only part that needs to be modified to be compatible with Linux is the hooking code and the calls. Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?

  • ASP.NET MVC: How to create a usable UrlHelper instance?

    - by Marek
    I am using quartz.net to schedule regular events within an ASP.NET MVC application. The scheduled job should call a service-layer script that requires a UrlHelper instance (for creating URLs, based on the correct routes (via urlHelper.Action(..)), contained in emails that will be sent by the service). I do not want to hardcode the links into the emails - they should be resolved using the UrlHelper. The job: public class EvaluateRequestsJob : Quartz.IJob { public void Execute(JobExecutionContext context) { // where to get a usable urlHelper instance? ServiceFactory.GetRequestService(urlHelper).RunEvaluation(); } } Please note that this is not run within the MVC pipeline. There is no current request being served; the code is run by the Quartz scheduler at defined times. How do I get a UrlHelper instance usable in the indicated place? If it is not possible to construct a UrlHelper, the other option I see is to make the job "self-call" a controller action by doing an HTTP request - while executing the action I will of course have a UrlHelper instance available - but this seems a little bit hacky to me.
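
    For completeness, one pattern that often comes up for this situation is sketched below. It assumes the routes have already been registered in RouteTable.Routes by the time the job runs, and that the host name is supplied from configuration, since there is no live request to read it from:

    ```csharp
    using System.IO;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Routing;

    public static class UrlHelperFactory
    {
        // Fabricate just enough of an HTTP context for UrlHelper to resolve
        // routes outside of a real request (e.g. from a Quartz job).
        public static UrlHelper CreateForBackgroundJob(string hostName)
        {
            var request = new HttpRequest("", "http://" + hostName + "/", "");
            var response = new HttpResponse(new StringWriter());
            var httpContext = new HttpContextWrapper(new HttpContext(request, response));
            var requestContext = new RequestContext(httpContext, new RouteData());
            return new UrlHelper(requestContext, RouteTable.Routes);
        }
    }
    ```

    Links for the emails can then be made absolute with the overloads that take a protocol, e.g. urlHelper.Action("Evaluate", "Requests", new { id = 5 }, "http") (controller and action names here are illustrative), which builds the host part from the fabricated request; whether this is less hacky than the self-calling HTTP request is a matter of taste.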

  • Objective-C: How to access methods in other classes

    - by Adam
    I have what I know is a simple question, but after many searches in books and on the Internet, I can't seem to come up with a solution. I have a standard iPhone project that contains, among other things, a ViewController. My app works just fine at this point. I now want to create a generic class (extending NSObject) that will have some basic utility methods. Let's call this class Util.m (along with the associated .h file). I create the Util class (and .h file) in my project, and now I want to access the methods contained in that class from my ViewController. Here's an example of a simple version of Util.h: #import <Foundation/Foundation.h> @interface Util : NSObject { } - (void)myMethod; @end Then the Util.m file would look something like this: #import "Util.h" @implementation Util - (void)myMethod { NSLog(@"myMethod Called"); } @end Now that my Util class is created, I want to call the myMethod method from my ViewController. In my ViewController's .h file, I do the following: #import "Util.h" @interface MyViewController : UIViewController { Util *utils; } @property (assign) Util *utils; @end Finally, in the ViewController.m, I do the following: #import "Util.h" @implementation MyViewController @synthesize utils; - (void)viewDidLoad { [super viewDidLoad]; utils.myMethod; //this doesn't work [utils myMethod]; //this doesn't work either NSLog(@"utils = %@", utils); //in the console, this prints "utils = (null)" } What am I doing wrong? I'd like to not only be able to directly reference other classes/methods in a simple util class like this, but I'd also like to directly reference other ViewControllers and their properties and methods as well. I'm stumped! Please help.

  • PySide Qt script doesn't launch from Spyder but works from shell

    - by Maxim Zaslavsky
    I have a weird bug in my project that uses PySide for its Qt GUI, and in response I'm trying to test with simpler code that sets up the environment. Here is the code I am testing with: http://stackoverflow.com/a/6906552/130164 When I launch that from my shell (python test.py), it works perfectly. However, when I run that script in Spyder, I get the following error: Traceback (most recent call last): File "/home/test/Desktop/test/test.py", line 31, in <module> app = QtGui.QApplication(sys.argv) RuntimeError: A QApplication instance already exists. If it helps, I also get the following warning: /usr/lib/pymodules/python2.6/matplotlib/__init__.py:835: UserWarning: This call to matplotlib.use() has no effect because the the backend has already been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot, or matplotlib.backends is imported for the first time. Why does that code work when launched from my shell but not from Spyder? Update: Mata answered that the problem happens because Spyder uses Qt, which makes sense. For now, I've set up execution in Spyder using the "Execute in an external system terminal" option, which doesn't cause errors but doesn't allow debugging, either. Does Spyder have any built-in workarounds to this?

  • How do I add and remove an event listener using a function with parameters?

    - by Bungle
    Sorry if this is a common question, but I couldn't find any answers that seemed pertinent through searching. If I attach an event listener like this: window.addEventListener('scroll', function() { check_pos(box); }, false); it doesn't seem to work to try to remove it later, like this: window.removeEventListener('scroll', function() { check_pos(box); }, false); I assume this is because the addEventListener and removeEventListener methods want a reference to the same function, while I've provided them with anonymous functions, which, while identical in code, are not literally the same. How can I change my code to get the call to removeEventListener to work? The "box" argument refers to the name of an <iframe> that I'm tracking on the screen; that is, I want to be able to subscribe to the scroll event once for each <iframe> that I have (the quantity varies), and once the check_pos() function measures a certain position, it will call another function and also remove the event listener to free up system resources. My hunch is that the solution will involve a closure and/or naming the anonymous function, but I'm not sure exactly what that looks like, and would appreciate a concrete example. Hope that makes sense. Thanks for any help!

  • Passing Object to Service in WCF

    - by hgulyan
    Hi, I have my custom class Customer with its properties. I added the DataContract attribute to the class and DataMember to the properties, and it was working fine, but when I call a service class's function, passing a customer instance as a parameter, some of my properties get 0 values. While debugging I can see my properties' values, and after the call reaches the function, some properties' values are 0. Why can this be? There's no code between these two actions. The DataContract attribute works fine otherwise; everything's OK. Any suggestions on this issue? I tried to change ByRef to ByVal, but it doesn't change anything. Why would it pass other values correctly and some of the integer types as just 0? Maybe the answer is simple, but I can't figure it out. Thank you. <DataContract()> Public Class Customer Private Type_of_clientField As Integer = -1 <DataMember(Order:=1)> Public Property type_of_client() As Integer Get Return Type_of_clientField End Get Set(ByVal value As Integer) Type_of_clientField = value End Set End Property End Class <ServiceContract(SessionMode:=SessionMode.Allowed)> <DataContractFormat()> Public Interface CustomerService <OperationContract()> Function addCustomer(ByRef customer As Customer) As Long End Interface The type_of_client property's value is 6 before I call the addCustomer function. After it enters that function, the value is 0. UPDATE: The issue is in instance creation. When I create an instance of a class on the client side that is stored on the service side, some of my properties pass 0 or nothing, but when I call a function of a service class that returns a new instance of that class, it works fine. What's the difference? Could this be a serialization issue?

  • Strange Problem with Webservice and IIS

    - by Rene
    Hello there, I have a problem which confuses me a little bit, in that I don't have any idea what it could be. The system I'm using is Windows Vista, IIS 7.0, VS2008, Windows Software Factory, Entity Framework, WCF. The binding for all web services is wsHttpBinding. I'm using a web service hosted in IIS. This web service uses/calls another web service (also installed in IIS). If I use a client calling the first web service (which calls the second web service), it works fine for about 4-10 times. And then (the problem is repeatable, but sometimes it happens after 4 calls, sometimes after 10 - it always happens eventually) the service and IIS get stuck. Stuck means that this web service isn't callable anymore and generates a timeout after 1 minute. Even increasing the timeout doesn't change anything. If I try to restart IIS I get a timeout error, so IIS is also "stuck" (it is not really stuck, but I can't restart it). Only if I kill w3wp.exe is IIS restartable, and the web service will work again (until I again call the service several times). The log files (I'm no expert in things like logging or where to find/enable such logs - so to say, I'm a newbie) such as HTTP logging, the Event Viewer, and WCF message logging don't show any hints about the source of the problem. I don't have this problem when I'm using a web service which doesn't call another service. Calling the web service is done via a service reference (I'm using no proxy classes), but I think this should be no problem. I have no idea what is happening, nor how to solve this problem. Regards, Rene
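
    Not a diagnosis, but the classic cause of "works a handful of times, then everything times out" in a service-calls-service setup is the outer service never closing the inner proxy: with wsHttpBinding each call can hold a session, and the WCF 3.x service throttle defaults to 10 concurrent sessions, which lines up suspiciously well with 4-10 successful calls. A sketch of the usual close-or-abort pattern around the inner call, with an assumed contract and endpoint configuration name:

    ```csharp
    using System.ServiceModel;

    [ServiceContract]
    public interface IInnerService
    {
        [OperationContract]
        string DoWork(string input);
    }

    public static class InnerServiceCaller
    {
        public static string CallInnerService(string input)
        {
            // "InnerServiceEndpoint" is an assumed <client> endpoint name in config.
            var factory = new ChannelFactory<IInnerService>("InnerServiceEndpoint");
            IInnerService proxy = factory.CreateChannel();
            try
            {
                string result = proxy.DoWork(input);
                ((IClientChannel)proxy).Close();   // releases the session/connection
                factory.Close();
                return result;
            }
            catch
            {
                ((IClientChannel)proxy).Abort();   // never leave a faulted channel open
                factory.Abort();
                throw;
            }
        }
    }
    ```

    If the proxies are already being closed, the serviceThrottling settings and the number of outbound connections are the next things worth inspecting.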

  • Static library woes in iPhone 3.x with categories and C libraries

    - by hgpc
    I have a static library (let's call it S) that uses a category (NSData+Base64 from MGTwitterEngine) and a C library (MiniZip wrapped by ZipArchive). This static library is used in an iPhone 3.x project (let's call it A). To be able to use the MiniZip library I included its files in project A as well as the static library S. If not I get compilation errors. Project A works fine on the simulator. When I run it on the device, I get unrecognized selector errors when the category is used. As pointed out here, it seems there's a linker bug that affects categories in iPhone 3.x (http://stackoverflow.com/questions/1147676/categories-in-static-library-for-iphone-device-3-0). The workaround is to add -all_load to the Other Linker Flags of the project that references the static library. However, if I do this then I get duplicate symbol errors because I included the MiniZip libraries in project A. A workaround is to include the category files in project A as well. If I do this, project A works well in the device, but fails to build on the simulator because of duplicate symbol errors. How should I set up project A to make it work on the simulator and the device with the same configuration?

  • Is there any Java Decompiler that can correctly decompile calls to overloaded methods?

    - by mihi
    Consider this (IMHO simple) example: public class DecompilerTest { public static void main(String[] args) { Object s1 = "The", s2 = "answer"; doPrint((Object) "You should know:"); for (int i = 0; i < 2; i++) { doPrint(s1); doPrint(s2); s1 = "is"; s2 = new Integer(42); } System.out.println(); } private static void doPrint(String s1) { System.out.print("Wrong!"); } private static void doPrint(Object s1) { System.out.print(s1 + " "); } } Compile it with source/target level 1.1 without debug information (i.e. no local variable information should be present) and try to decompile it. I tried Jad, JD-GUI and Fernflower, and all of them got at least one of the call wrong (i. e. the program printed "Wrong!" at least once) Is there really no java decompiler that can infer the right casts so that it will not call the wrong overload?

  • Flex build error

    - by incrediman
    I'm totally new to flex. I'm getting a build error when using flex. That is, I've generated a .c file using flex, and, when running it, am getting this error: 1>lextest.obj : error LNK2001: unresolved external symbol "int __cdecl isatty(int)" (?isatty@@YAHH@Z) 1>C:\...\lextest.exe : fatal error LNK1120: 1 unresolved externals here is the lex file I'm using (grabbed from here): /*** Definition section ***/ %{ /* C code to be copied verbatim */ #include <stdio.h> %} /* This tells flex to read only one input file */ %option noyywrap %% /*** Rules section ***/ /* [0-9]+ matches a string of one or more digits */ [0-9]+ { /* yytext is a string containing the matched text. */ printf("Saw an integer: %s\n", yytext); } . { /* Ignore all other characters. */ } %% /*** C Code section ***/ int main(void) { /* Call the lexer, then quit. */ yylex(); return 0; } As well, why do I have to put a 'main' function in the lex syntax code? What I'd like is to be able to call yylex(); from another c file.

  • updating system's time using .Net

    - by user62958
    I am trying to update my system time using the following: [StructLayout(LayoutKind.Sequential)] private struct SYSTEMTIME { public ushort wYear; public ushort wMonth; public ushort wDayOfWeek; public ushort wDay; public ushort wHour; public ushort wMinute; public ushort wSecond; public ushort wMilliseconds; } [DllImport("kernel32.dll", EntryPoint = "GetSystemTime", SetLastError = true)] private extern static void Win32GetSystemTime(ref SYSTEMTIME lpSystemTime); [DllImport("kernel32.dll", EntryPoint = "SetSystemTime", SetLastError = true)] private extern static bool Win32SetSystemTime(ref SYSTEMTIME lpSystemTime); public void SetTime() { TimeSystem correctTime = new TimeSystem(); DateTime sysTime = correctTime.GetSystemTime(); // Call the native GetSystemTime method // with the defined structure. SYSTEMTIME systime = new SYSTEMTIME(); Win32GetSystemTime(ref systime); // Copy the corrected time into the structure. systime.wYear = (ushort)sysTime.Year; systime.wMonth = (ushort)sysTime.Month; systime.wDayOfWeek = (ushort)sysTime.DayOfWeek; systime.wDay = (ushort)sysTime.Day; systime.wHour = (ushort)sysTime.Hour; systime.wMinute = (ushort)sysTime.Minute; systime.wSecond = (ushort)sysTime.Second; systime.wMilliseconds = (ushort)sysTime.Millisecond; Win32SetSystemTime(ref systime); } When I debug, everything looks good and all the values are correct, but when it calls Win32SetSystemTime(ref systime) the actual system time (the displayed time) doesn't change and stays the same. The strange part is that when I call Win32GetSystemTime(ref systime) it gives me the new, updated time. Can someone give me some help on this?
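
    Two things usually explain this exact symptom; the sketch below bakes both in (the class and method names are illustrative). First, SetSystemTime expects UTC, not local time. Second, it fails - returning false, typically with error 1314, ERROR_PRIVILEGE_NOT_HELD - when the process lacks SeSystemtimePrivilege, e.g. when it is not elevated on Vista, and the code above never checks that return value:

    ```csharp
    using System;
    using System.ComponentModel;
    using System.Runtime.InteropServices;

    public static class SystemClock
    {
        [StructLayout(LayoutKind.Sequential)]
        private struct SYSTEMTIME
        {
            public ushort wYear, wMonth, wDayOfWeek, wDay,
                          wHour, wMinute, wSecond, wMilliseconds;
        }

        [DllImport("kernel32.dll", SetLastError = true)]
        private static extern bool SetSystemTime(ref SYSTEMTIME st);

        public static void Set(DateTime when)
        {
            DateTime utc = when.ToUniversalTime();   // SetSystemTime works in UTC
            var st = new SYSTEMTIME
            {
                wYear = (ushort)utc.Year,
                wMonth = (ushort)utc.Month,
                wDay = (ushort)utc.Day,              // wDayOfWeek is ignored by the API
                wHour = (ushort)utc.Hour,
                wMinute = (ushort)utc.Minute,
                wSecond = (ushort)utc.Second,
                wMilliseconds = (ushort)utc.Millisecond
            };
            if (!SetSystemTime(ref st))
            {
                // Surfaces ERROR_PRIVILEGE_NOT_HELD (1314) and friends instead
                // of failing silently.
                throw new Win32Exception(Marshal.GetLastWin32Error());
            }
        }
    }
    ```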

  • Disposing underlying object from finalizer in an immutable object

    - by Juan Luis Soldi
    I'm trying to wrap Awesomium and make it look, to the rest of my code, as close as possible to .NET's WebBrowser, since this is for an existing application that already uses the WebBrowser. In this library there is a class called JSObject, which represents a JavaScript object. You can get one of these, for instance, by calling the ExecuteJavascriptWithResult method of the WebView class. If you call it like myWebView.ExecuteJavascriptWithResult("document", string.Empty).ToObject(), you get a JSObject that represents the document. I'm writing an immutable class (its only field is a readonly JSObject) called JSObjectWrap that wraps JSObject, which I want to use as a base class for other classes that would emulate .NET classes such as HtmlElement and HtmlDocument. Now, these classes don't implement Dispose, but JSObject does. What I first thought was to call the underlying JSObject's Dispose method in JSObjectWrap's finalizer (instead of having JSObjectWrap implement Dispose) so that the rest of my code can stay the way it is (instead of having to add usings everywhere and make sure every JSObjectWrap is properly disposed). But I just realized that if two or more JSObjectWraps share the same underlying JSObject and one of them gets finalized, this will mess up the other JSObjectWraps. So now I'm thinking maybe I should keep a static Dictionary of JSObjects and count how many of each are being referenced by a JSObjectWrap, but this sounds messy and I think it could cause major performance issues. Since this sounds to me like a common pattern, I wonder if anyone else has a better idea.
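
    One possible shape for the shared-ownership idea, purely as a sketch (nothing here is Awesomium API, and the lock-plus-dictionary overhead is tiny unless wrappers are created in enormous numbers): count wrappers per underlying JSObject and dispose it only when the last wrapper releases it - from an IDisposable.Dispose on the wrapper rather than from a finalizer, since finalizers run in nondeterministic order and should not touch other managed objects that may already have been finalized:

    ```csharp
    using System;
    using System.Collections.Generic;

    public sealed class SharedHandle<T> where T : class, IDisposable
    {
        // Reference counts per underlying object, keyed by reference identity.
        private static readonly Dictionary<T, int> Counts = new Dictionary<T, int>();
        private readonly T _target;
        private bool _released;

        public SharedHandle(T target)
        {
            _target = target;
            lock (Counts)
            {
                int count;
                Counts.TryGetValue(target, out count);
                Counts[target] = count + 1;
            }
        }

        public T Target { get { return _target; } }

        public void Release()
        {
            if (_released) return;
            _released = true;
            lock (Counts)
            {
                if (--Counts[_target] == 0)
                {
                    Counts.Remove(_target);
                    _target.Dispose();   // last wrapper gone: dispose exactly once
                }
            }
        }
    }
    ```

    JSObjectWrap would then hold a SharedHandle<JSObject>, stay immutable (the handle never changes), and expose Dispose by delegating to Release.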
