Search Results

Search found 47301 results on 1893 pages for 'system matrix'.


  • Internet Explorer cannot display the webpage?

    - by kumar
    Hi guys, I have an application deployed in IIS on a Remote Desktop machine. When I access that application from my local system I get "Internet explorer cannot display the webpage", but it runs fine on the Remote Desktop itself, just not on the local system. Any idea why? Regards, kumar

    Read the article

  • What's "@Override" there for in java?

    - by symfony
        public class Animal {
            public void eat() {
                System.out.println("I eat like a generic Animal.");
            }
        }

        public class Wolf extends Animal {
            @Override
            public void eat() {
                System.out.println("I eat like a wolf!");
            }
        }

    Does @Override actually have some functionality, or is it just a kind of comment?
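
    A quick illustration of the difference it makes (an added sketch, not from the original post): @Override has no runtime behaviour, but it tells the compiler to verify that the method really overrides something from a supertype, which turns silent mistakes like a typo into compile errors:

        public class Wolf extends Animal {
            @Override
            public void eatt() { // compile error: method does not override or implement a method from a supertype
                System.out.println("I eat like a wolf!");
            }
        }

    Without the annotation, eatt() would compile fine as a brand-new method and the inherited Animal version would silently keep being called.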

    Read the article

  • Calling a class from another file in ASP.NET / VB.NET

    - by davemackey
    Let's say I have a class like this in class1.vb:

        Public Class my_class
            Public Sub my_sub()
                Dim myvar As String
                myvar = 10
                Session("myvar") = myvar
            End Sub
        End Class

    Then I have an ASP.NET page with a code-behind file, default.aspx and default.aspx.vb, and I want to call my_class. I'm doing the following, but it doesn't work:

        Imports my_app.my_class

        Partial Public Class _default
            Inherits System.Web.UI.Page

            Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
                my_class()
            End Sub
        End Class

    I get a "Reference to a non-shared member requires an object reference" error.
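
    A hedged sketch of the usual fix (not part of the original post): my_sub is an instance member, so the call needs an object reference, or the method has to be declared Shared:

        ' Option 1: create an instance and call the method on it
        Dim c As New my_class()
        c.my_sub()

        ' Option 2: declare the method Shared and call it via the class name
        ' Public Shared Sub my_sub() ... then: my_class.my_sub()

    Note that Session is also only directly available inside a page; a standalone class would normally reach it through HttpContext.Current.Session.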

    Read the article

  • Dealing with missing messages in JavaScript when using BOSH

    - by JamieD
    We recently went into private beta on our flagship product and had a small launch event. Unfortunately the venue had a terrible wireless connection and packets were being dropped left, right and centre, causing havoc with our system; basically it wasn't able to work at all! Luckily we were able to switch to a different network and rescue the demo. This highlighted something that I knew was already an issue, but I hadn't appreciated quite how much of an issue it could be. Our system relies heavily on BOSH and has a rather large JavaScript code base which now works rather well under good network conditions. However, we need to make it work well under bad network conditions too.

    Because XMPP is a fire-and-forget system, it's not easy to tell whether a message you sent, or were supposed to receive, was actually sent or received. For instance, we have an offer system: one user sends an offer to another over BOSH. When this message is received by the server, a message is published to the offering user's offers_sent PEP node and a similar message to the receiving user's offers_received PEP node. While the sending user can tell (relatively) easily whether their offer was sent, if the notification to the receiving user is never received, that user will never know they missed a message.

    A little about our JavaScript setup; it has four main layers:

    1. StropheJS
    2. An MVC framework for dealing with low-level tasks and to build on top of
    3. An application layer which contains the app logic (routes, controllers, models, etc.) as well as a browser cache of the model data
    4. A UI layer that receives events from, and publishes events to, the application layer

    One way to solve the missing-messages issue would be to periodically check the PEP nodes for new data that the browser doesn't know about. If a new message were discovered, the browser's cache would be invalidated and all new data would be requested from the server (see the sketch below). I'm not sure this is the best way to go, and it also doesn't cover all situations. We certainly don't want to get into a situation where we send messages to confirm that the previous message was received at its destination, as this would double the network traffic.

    With the number of real-time websites growing daily, this is an issue that must have been encountered by other developers, so it would be interesting to see how others have solved it. As far as I can see, there are two situations in which messages go missing:

    1. On poor connections, messages are not sent or received because the packets are dropped.
    2. Around page navigation: a message is received by the browser but is not fully processed and stored in the local cache before the page is unloaded, or a message is added to the send queue but never sent before the page is unloaded.

    I suspect the hardest issue to solve will be number 2. Any thoughts on the subject would be much appreciated.
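
    A minimal sketch of that periodic reconciliation idea (added for illustration; every name here, pepNode, fetchItemsSince, cache, is hypothetical, not Strophe or XMPP API):

        // Ask the server for anything newer than the newest item held locally.
        // If something comes back, the browser missed a notification, so the
        // local model cache is invalidated and reloaded in full.
        function reconcile(pepNode, cache) {
            pepNode.fetchItemsSince(cache.newestItemId(), function (missedItems) {
                if (missedItems.length > 0) {
                    cache.invalidate();
                    cache.reload();
                }
            });
        }

        // run the check on a timer, e.g. every 30 seconds
        setInterval(function () { reconcile(offersReceivedNode, offersCache); }, 30000);

    This trades a small amount of steady background traffic for an upper bound on how long a missed notification can stay missed, without acknowledging every individual message.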

    Read the article

  • Java loop for a certain duration

    - by portoalet
    Hi, is there a way I can run a for loop for a certain amount of time easily, without measuring the time myself using System.currentTimeMillis()? I.e. I want to do something like this in Java:

        int x = 0;
        for( 2 minutes ) {
            System.out.println(x++);
        }

    Thanks
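
    A hedged sketch of one way to do it without polling the clock in the loop condition yourself (added, not from the original question): let a java.util.Timer flip a flag when the time is up:

        import java.util.Timer;
        import java.util.TimerTask;
        import java.util.concurrent.atomic.AtomicBoolean;

        public class TimedLoop {
            public static void main(String[] args) {
                final AtomicBoolean running = new AtomicBoolean(true);
                Timer timer = new Timer();
                // after 2 minutes, the timer thread clears the flag
                timer.schedule(new TimerTask() {
                    @Override
                    public void run() { running.set(false); }
                }, 2 * 60 * 1000);

                int x = 0;
                while (running.get()) {
                    System.out.println(x++);
                }
                timer.cancel();
            }
        }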

    Read the article

  • I am getting the wrong client IP address

    - by Vibin Jith
    I am running an ASP.NET application. The web server is located on the same system. In the code-behind I just want to get the IP address of the requesting client, using this code:

        Request.UserHostAddress

    But I am getting a wrong address: 127.0.0.1. My system's IP address is 198.162.0.27.
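
    A note added for context: when the browser and IIS run on the same machine, the request genuinely arrives over the loopback interface, so 127.0.0.1 is what UserHostAddress should report; testing from a second machine returns that machine's address instead. If requests pass through a proxy or load balancer, the original client address usually has to be read from a forwarding header, e.g. (a hedged sketch):

        Dim ip As String = Request.ServerVariables("HTTP_X_FORWARDED_FOR")
        If String.IsNullOrEmpty(ip) Then ip = Request.UserHostAddress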

    Read the article

  • Creating a tiled world with OpenGL

    - by Tamir
    Hello, I'm planning to create a tiled world with OpenGL, with slightly rotated tiles, and the houses and buildings in the world will be made of models. Can anybody suggest what projection (orthogonal, perspective) I should use, and how to set up the view matrix (using OpenGL)? If you can't picture what style of world I'm planning to create, look at this game: http://www.youtube.com/watch?v=i6eYtLjFu-Y&feature=PlayList&p=00E63EDCF757EADF&index=2
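
    A minimal fixed-function sketch of the classic setup for that style (added for illustration; the eye position and world extents are arbitrary assumptions): an orthographic projection, so tiles don't shrink with distance, combined with a view matrix that looks diagonally down at the scene:

        /* assumes the fixed-function pipeline and GLU are available */
        #include <GL/glu.h>

        void setupTileCamera(int width, int height)
        {
            double aspect = (double)width / (double)height;

            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();
            /* orthographic: parallel projection, no perspective foreshortening */
            glOrtho(-10.0 * aspect, 10.0 * aspect, -10.0, 10.0, -100.0, 100.0);

            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            /* eye above and to the side of the origin, looking at the map centre */
            gluLookAt(20.0, 20.0, 20.0,   /* eye */
                      0.0, 0.0, 0.0,      /* centre */
                      0.0, 1.0, 0.0);     /* up */
        }

    A perspective projection (gluPerspective) works too and gives a subtle depth cue for the 3D building models, at the cost of tiles no longer being uniform in size on screen.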

    Read the article

  • Will these optimizations to my Ruby implementation of diff improve performance in a Rails app?

    - by grg-n-sox
    <tl;dr> In source version control diff patch generation, would it be worth it to use the optimizations listed at the very bottom of this writing (see <optimizations>) in my Ruby implementation of diff for making diff patches? </tl;dr> <introduction> I am programming something I have never done before and there might already be tools out there to do the exact thing I am programming but at this point I am having too much fun to care so I am still going to do it from scratch, even if there is a tool for this. So anyways, I am working on a Ruby on Rails app and need a certain feature. Basically I want each entry in a table of mine, let's say for example a table of video games, to have a stored chunk of text that represents a review or something of the sort for that table entry. However, I want this text to be both editable by any registered user and also keep track of different submissions in a version control system. The simplest solution I could think of is just implement a solution that keeps track of the text body and the diff patch history of different versions of the text body as objects in Ruby and then serialize it, preferably in human readable form (so I'll most likely use YAML for this) for editing if needed due to corruption by a software bug or a mistake is made by an admin doing some version editing. So at first I just tried to dive in head first into this feature to find that the problem of generating a diff patch is more difficult that I thought to do efficiently. So I did some research and came across some ideas. Some I have implemented already and some I have not. However, it all pretty much revolves around the longest common subsequence problem, as you would already know if you have already done anything with diff or diff-like features, and optimization the function that solves it. Currently I have it so it truncates the compared versions of the text body from the beginning and end until non-matching lines are found. Then it solves the problem using a comparison matrix, but instead of incrementing the value stored in a cell when it finds a matching line like in most longest common subsequence algorithms I have seen examples of, I increment when I have a non-matching line so as to calculate edit distance instead of longest common subsequence. Although as far as I can tell between the two approaches, they are essentially two sides of the same coin so either could be used to derive an answer. It then back-traces through the comparison matrix and notes when there was an incrementation and in which adjacent cell (West, Northwest, or North) to determine that line's diff entry and assumes all other lines to be unchanged. Normally I would leave it at that, but since this is going into a Rails environment and not just some stand-alone Ruby script, I started getting worried about needing to optimize at least enough so if a spammer that somehow knew how I implemented the version control system and knew my worst case scenario entry still wouldn't be able to hit the server that bad. After some searching and reading of research papers and articles through the internet, I've come across several that seem decent but all seem to have pros and cons and I am having a hard time deciding how well in this situation that the pros and cons balance out. So are the ones listed here worth it? I have listed them with known pros and cons. 
</introduction> <optimizations> Chop the compared sequences into multiple chucks of subsequences by splitting where lines are unchanged, and then truncating each section of unchanged lines at the beginning and end of each section. Then solve the edit distance of each subsequence. Pro: Changes the time increase as the changed area gets bigger from a quadratic increase to something more similar to a linear increase. Con: Figuring out where to split already seems like you have to solve edit distance except now you don't care how it is changed. Would be fine if this was solvable by a process closer to solving hamming distance but a single insertion would throw this off. Use a cryptographic hash function to both convert all sequence elements into integers and ensure uniqueness. Then solve the edit distance comparing the hash integers instead of the sequence elements themselves. Pro: The operation of comparing two integers is faster than the operation of comparing two strings, so a slight performance gain is received after every comparison, which can be a lot overall. Con: Using a cryptographic hash function takes time to convert all the sequence elements and may end up costing more time to do the conversion that you gain back from the integer comparisons. You could use the built in hash function for a string but that will not guarantee uniqueness. Use lazy evaluation to only calculate the three center-most diagonals of the comparison matrix and then only calculate additional diagonals as needed. And then also use this approach to possibly remove the need on some comparisons to compare all three adjacent cells as desribed here. Pro: Can turn an algorithm that always takes O(n * m) time and make it so only worst case scenario is that time, best case becomes practically linear, and average case is somewhere between the two. Con: It is an algorithm I've only seen implemented in functional programming languages and I am having a difficult time comprehending how to convert this into Ruby based on how it is described at the site linked to above. Make a C module and do the hard work at the native level in C and just make a Ruby wrapper for it so Ruby can make all the calls to it that it needs. Pro: I have to imagine that evaluating something like this in could be a LOT faster. Con: I have no idea how Rails handles apps with ruby code that has C extensions and it hurts the portability of the app. This is an optimization for after the solving of edit distance, but idea is to store additional combined diffs with the ones produced by each version to make a delta-tree data structure with the most recently made diff as the root node of the tree so getting to any version takes worst case time of O(log n) instead of O(n). Pro: Would make going back to an old version a lot faster. Con: It would mean every new commit, the delta-tree would get a new root node that will cost time to reorganize the delta-tree for an operation that will be carried out a lot more often than going back a version, not to mention the unlikelihood it will be an old version. </optimizations> So are these things worth the effort?

    Read the article

  • Error in Facebook.dll (Facebook SDK)

    - by Surendar Radhakrishnan
    I got the web application working with the Facebook SDK, but some time after I deployed it, having run fine for a while, it started throwing this error:

        Server Error in '/' Application.
        Could not load file or assembly 'Facebook, Version=4.1.1.0, Culture=neutral, PublicKeyToken=58cb4f2111d1e6de' or one of its dependencies. Access is denied.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

        Exception Details: System.IO.FileLoadException: Could not load file or assembly 'Facebook, Version=4.1.1.0, Culture=neutral, PublicKeyToken=58cb4f2111d1e6de' or one of its dependencies. Access is denied.

        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

        Assembly Load Trace: The following information can be helpful to determine why the assembly 'Facebook, Version=4.1.1.0, Culture=neutral, PublicKeyToken=58cb4f2111d1e6de' could not be loaded.

        WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog].

        Stack Trace:
        [FileLoadException: Could not load file or assembly 'Facebook, Version=4.1.1.0, Culture=neutral, PublicKeyToken=58cb4f2111d1e6de' or one of its dependencies. Access is denied.]
           Secured_Login.FacebookVerification() +0
           System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e) +25
           System.Web.UI.Control.LoadRecursive() +71
           System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +3048

        Version Information: Microsoft .NET Framework Version:4.0.30319; ASP.NET Version:4.0.30319.1

    This is the method called from Page_Load:

        protected void Page_Load(object sender, EventArgs e)
        {
            FacebookVerification();
        }

        protected void FacebookVerification()
        {
            try
            {
                FacebookApp fbApp = new FacebookApp();
                if (fbApp.Session != null)
                {
                    dynamic myinfo = fbApp.Get("me");
                    String firstname = myinfo.first_name;
                    String lastname = myinfo.last_name;
                    lblFBStatus.Text = "you signed in as " + firstname + " " + lastname;
                }
                else
                {
                    lblFBStatus.Text = "Please sign in with facebook";
                }
            }
            catch (Exception)
            {
                throw;
            }
        }

    Read the article

  • Best Practices / Patterns for Enterprise Protection/Remediation of SSNs (Social Security Numbers)

    - by Erik Neu
    I am interested in hearing about enterprise solutions for SSN handling. (I looked pretty hard for any pre-existing post on SO, including reviewing the terrific SO automated "Related Questions" list, and did not find anything, so hopefully this is not a repeat.)

    First, I think it is important to enumerate the reasons systems/databases use SSNs (note: these are reasons for the de facto current state; I understand that many of them are not good reasons):

    1. Required for interaction with external entities. This is the most valid case, where external entities your system interfaces with require an SSN. This would typically be government, tax and financial.
    2. The SSN is used to ensure system-wide uniqueness.
    3. The SSN has become the default foreign key used internally within the enterprise, to perform cross-system joins.
    4. The SSN is used for user authentication (e.g., log-on).

    The enterprise solution that seems optimal to me is to create a single SSN repository that is accessed by all applications needing to look up SSN info. This repository substitutes a globally unique, random 9-digit number (ASN) for the true SSN. I see many benefits to this approach. First of all, it is obviously highly backwards-compatible; all your systems "just" have to go through a major, synchronized, one-time data-cleansing exercise, where they replace the real SSN with the alternate ASN. Also, it is centralized, so it minimizes the scope for inspection and compliance. (Obviously, as a negative, it also creates a single point of failure.)

    This approach would solve issues 2 and 3 without ever requiring lookups to get the real SSN. For issue 1, authorized systems could provide an ASN and be returned the real SSN. This would of course be done over secure connections, and the requesting systems would never persist the full SSN. Also, if the requesting system only needs the last 4 digits of the SSN, then that is all that would ever be passed. Issue 4 could be handled the same way as issue 1, though obviously the best thing would be to move away from having users supply an SSN for log-on.

    There are a couple of papers on this:

    UC Berkeley: http://bit.ly/bdZPjQ
    Oracle Vault: bit.ly/cikbi1
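
    To make the repository idea concrete, a hedged sketch of what its surface might look like (everything here is hypothetical illustration, not a known product API):

        /** Central vault that maps real SSNs to random alias numbers (ASNs). */
        public interface SsnVault {

            /** Returns the ASN for a real SSN, minting a new random alias on first sight. */
            String asnFor(String realSsn);

            /** Returns the real SSN for an ASN; restricted to callers authorized for case 1. */
            String ssnFor(String asn);

            /** Partial disclosure for display purposes, without exposing the full SSN. */
            String last4For(String asn);
        }

    Every other system stores only ASNs, so a breach of any one application database exposes no real SSNs; only the vault itself (the single point of failure noted above) needs the strictest controls.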

    Read the article

  • Why does release mode affect Thread.CurrentThread.CurrentCulture?

    - by satispunk
    Hi, I have an ASP.NET application. I set the culture in Application_BeginRequest:

        System.Threading.Thread.CurrentThread.CurrentCulture = myCulture;
        System.Threading.Thread.CurrentThread.CurrentUICulture = myCulture;

    When I read the current culture from a page, it is myCulture in debug mode but "en-GB" in release mode. I don't understand why release mode affects Thread.CurrentThread.CurrentCulture. I use VS2010 and .NET 4.0. Thanks
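
    A hedged workaround rather than a confirmed diagnosis: ASP.NET does not guarantee that the thread handling Application_BeginRequest is the one that later executes the page, so a per-thread culture set that early can be lost. Declaring the culture in web.config lets the framework apply it for every request regardless of thread:

        <!-- "de-DE" stands in for whatever myCulture is -->
        <system.web>
          <globalization culture="de-DE" uiCulture="de-DE" />
        </system.web>

    Alternatively, overriding Page.InitializeCulture sets the culture late enough to stick to the executing thread.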

    Read the article

  • NHibernate (3.1.0.4000) NullReferenceException using Query<> and NHibernate Facility

    - by TigerShark
    I have a problem with NHibernate that I can't seem to find any solution for. In my project I have a simple entity (Batch), but whenever I try to run the following test, I get an exception. I've tried a couple of different ways to perform a similar query, but I get an almost identical exception for all of them (it differs only in which LINQ method is being executed). The first test:

        [Test]
        public void QueryLatestBatch()
        {
            using (var session = SessionManager.OpenSession())
            {
                var batch = session.Query<Batch>()
                    .FirstOrDefault();

                Assert.That(batch, Is.Not.Null);
            }
        }

    The exception:

        System.NullReferenceException : Object reference not set to an instance of an object.
        at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery)
        at NHibernate.Linq.NhQueryProvider.Execute(Expression expression)
        at System.Linq.Queryable.FirstOrDefault(IQueryable`1 source)

    The second test:

        [Test]
        public void QueryLatestBatch2()
        {
            using (var session = SessionManager.OpenSession())
            {
                var batch = session.Query<Batch>()
                    .OrderBy(x => x.Executed)
                    .Take(1)
                    .SingleOrDefault();

                Assert.That(batch, Is.Not.Null);
            }
        }

    The exception:

        System.NullReferenceException : Object reference not set to an instance of an object.
        at NHibernate.Linq.NhQueryProvider.PrepareQuery(Expression expression, ref IQuery query, ref NhLinqExpression nhQuery)
        at NHibernate.Linq.NhQueryProvider.Execute(Expression expression)
        at System.Linq.Queryable.SingleOrDefault(IQueryable`1 source)

    However, this one is passing (using QueryOver<>):

        [Test]
        public void QueryOverLatestBatch()
        {
            using (var session = SessionManager.OpenSession())
            {
                var batch = session.QueryOver<Batch>()
                    .OrderBy(x => x.Executed).Asc
                    .Take(1)
                    .SingleOrDefault();

                Assert.That(batch, Is.Not.Null);
                Assert.That(batch.Executed, Is.LessThan(DateTime.Now));
            }
        }

    Using the QueryOver<> API is not bad at all, but I'm just kind of baffled that the Query<> API isn't working, which is kind of sad, since the First() operation is very concise and our developers really enjoy LINQ. I really hope there is a solution to this, as it seems strange for these methods to fail such a simple test.

    EDIT: I'm using Oracle 11g, and my mappings are done with FluentNHibernate registered through Castle Windsor with the NHibernate Facility. As I wrote, the odd thing is that the query works perfectly with the QueryOver<> API, but not through LINQ.

    Read the article

  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        ' put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, which is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back...

    Some more background. This is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we de-serialize them.

    The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage occurs while this serialization is being done. I think we get large-object-heap fragmentation when we de-serialize the objects, and I expect there are other problems with large-object-heap fragmentation as well, given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the spread of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked:

    How to Stream data from/to SQL Server BLOB fields?
    Is there a SqlFileStream like class that works with Sql Server 2005?
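
    One well-known shape for the write side (a hedged sketch added here, not the poster's solution; note the .WRITE clause needs SQL Server 2005 or later, so it would not cover the 2000 customers): a minimal write-only Stream that appends each chunk to a varbinary(max) column, so only one chunk at a time lives in managed memory. The table and column names (ModelState, State, Id) are hypothetical:

        using System;
        using System.Data.SqlClient;
        using System.IO;

        public class SqlBlobWriteStream : Stream
        {
            private readonly SqlConnection conn;
            private readonly int id;
            private long written;

            // assumes the row already exists with State initialised to 0x
            public SqlBlobWriteStream(SqlConnection conn, int id)
            {
                this.conn = conn;
                this.id = id;
            }

            public override void Write(byte[] buffer, int offset, int count)
            {
                // .WRITE(@chunk, NULL, 0) appends the chunk to the column value
                using (SqlCommand cmd = new SqlCommand(
                    "UPDATE ModelState SET State.WRITE(@chunk, NULL, 0) WHERE Id = @id", conn))
                {
                    byte[] chunk = new byte[count];
                    Buffer.BlockCopy(buffer, offset, chunk, 0, count);
                    cmd.Parameters.AddWithValue("@chunk", chunk);
                    cmd.Parameters.AddWithValue("@id", id);
                    cmd.ExecuteNonQuery();
                }
                written += count;
            }

            public override bool CanRead { get { return false; } }
            public override bool CanSeek { get { return false; } }
            public override bool CanWrite { get { return true; } }
            public override long Length { get { return written; } }
            public override long Position
            {
                get { return written; }
                set { throw new NotSupportedException(); }
            }
            public override void Flush() { }
            public override int Read(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
            public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
            public override void SetLength(long value) { throw new NotSupportedException(); }
        }

    BinaryFormatter can then serialize straight into such a stream, ideally wrapped in a BufferedStream so the chunks sent to the database are reasonably sized.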

    Read the article

  • JPA One To Many Relationship Persistence Bug

    - by Brian
    Hey folks, I've got a really weird problem with a bi-directional relationship in JPA (Hibernate implementation). A User is based in one Region, and a Region can contain many Users. So the relationship is as follows:

    Region object:

        @OneToMany(mappedBy = "region", fetch = FetchType.LAZY, cascade = CascadeType.ALL)
        public Set<User> getUsers() {
            return users;
        }

        public void setUsers(Set<User> users) {
            this.users = users;
        }

    User object:

        @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE}, fetch = FetchType.EAGER)
        @JoinColumn(name = "region_fk")
        public Region getRegion() {
            return region;
        }

        public void setRegion(Region region) {
            this.region = region;
        }

    So, as you can see above, the relationship is lazy on the region side, i.e. I don't want the region to eagerly load all its users. Therefore, I have the following code within my DAO layer to add a user to an existing region object:

        public User setRegionForUser(String username, Long regionId) {
            Region r = (Region) this.get(Region.class, regionId);
            User u = (User) this.get(User.class, username);
            u.setRegion(r);
            Set<User> users = r.getUsers();
            users.add(u);
            System.out.println("The number of users in the set is: " + users.size());
            r.setUsers(users);
            this.update(r);
            return (User) this.update(u);
        }

    The problem is, when I run a little unit test to add 5 users to my region object, I see that the region.getUsers() set always stays stuck at 1 object; somehow the set isn't getting added to. My unit test code is as follows:

        public void setUp() {
            System.out.println("calling setup method");
            Region r = (Region) ManagerFactory.getCountryAndRegionManager().get(Region.class, Long.valueOf("2"));
            for (int i = 0; i < loop; i++) {
                User u = new User();
                u.setUsername("username_" + i);
                ManagerFactory.getUserManager().update(u);
                ManagerFactory.getUserManager().setRegionForUser("username_" + i, Long.valueOf("2"));
            }
        }

        public void tearDown() {
            System.out.println("calling teardown method");
            for (int i = 0; i < loop; i++) {
                ManagerFactory.getUserManager().deleteUser("username_" + i);
            }
        }

        public void testGetUsersForRegion() {
            Set<User> totalUsers = ManagerFactory.getCountryAndRegionManager().getUsersInRegion(Long.valueOf("2"));
            System.out.println("Expecting 5, got: " + totalUsers.size());
            this.assertEquals(5, totalUsers.size());
        }

    So the test keeps failing, saying there is only 1 user instead of the expected 5. Any ideas what I'm doing wrong? Thanks very much, Brian
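
    A hedged observation added here, not a confirmed diagnosis: with mappedBy, the User side owns the relationship, so the database rows may well be correct, and the stale count can come from each setRegionForUser call loading a fresh Region (and its users collection) in its own session before earlier additions are flushed. One common tidy-up is a helper that maintains both sides in a single unit of work:

        // keep both sides of the bidirectional association in sync
        // inside one session/transaction
        public void addUserToRegion(User u, Region r) {
            u.setRegion(r);      // owning side: this is what gets persisted
            r.getUsers().add(u); // inverse side: keeps the in-memory model consistent
        }

    Running the whole 5-user loop inside one transaction, then asserting after a flush/commit, would also rule out session-boundary effects.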

    Read the article

  • JBoss server log

    - by user309281
    Hi, I have a web application running in JBoss AS 4.2.2. I have observed that the JBoss server automatically shuts down, and the following lines appear in server.log:

        14:20:38,048 INFO [Server] Runtime shutdown hook called, forceHalt: true
        14:20:38,049 INFO [Server] JBoss SHUTDOWN: Undeploying all packages

    I want to enable TRACE for org.jboss.system.server.Server in jboss-log4j.xml, to hopefully get some more info when the server shuts down. Please let me know how to enable TRACE for org.jboss.system.server.Server in jboss-log4j.xml. Thanks in advance
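
    For what it's worth, a sketch of the usual way to do this in conf/jboss-log4j.xml (hedged: the exact syntax may vary between JBoss releases; older log4j builds need the XLevel class to express TRACE):

        <category name="org.jboss.system.server.Server">
          <priority value="TRACE" class="org.jboss.logging.XLevel"/>
        </category>

    Note that the file appender's Threshold (if one is set) must also allow TRACE, or the extra detail never reaches server.log.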

    Read the article

  • How can I control a ShockwaveFlash object's BGColor with a ColorDialog in VB.NET?

    - by testkhan
    I have the following code:

        Private Sub select_color_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles select_color.Click
            Dim ocolor As New ColorDialog
            ocolor.ShowDialog()
            Me.BackColor = ocolor.Color
        End Sub

    It changes the background color of the form to the color I select in the ColorDialog. Now I want to change the BGColor of the ShockwaveFlash object the same way. I can change the BGColor of the ShockwaveFlash object manually in the toolbox, but I want to change it via the ColorDialog. How can I do that?
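
    A hedged sketch of one way (assuming the designer-generated control is named AxShockwaveFlash1): the ActiveX Flash control exposes BGColor as a hex RGB string rather than a System.Drawing.Color, so the chosen color has to be converted first:

        Private Sub select_color_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles select_color.Click
            Dim ocolor As New ColorDialog
            If ocolor.ShowDialog() = DialogResult.OK Then
                Me.BackColor = ocolor.Color
                ' convert the Color to a six-digit hex string, e.g. "FF8800"
                AxShockwaveFlash1.BGColor = String.Format("{0:X2}{1:X2}{2:X2}", _
                    ocolor.Color.R, ocolor.Color.G, ocolor.Color.B)
            End If
        End Sub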

    Read the article

  • .NET JIT Code Cache leaking?

    - by pitchfork
    We have a server component written in .NET 3.5. It runs as a service on Windows Server 2008 Standard Edition. It works great, but after some time (days) we notice massive slowdowns and an increased working set. We expected some kind of memory leak and used WinDbg/SOS to analyze dumps of the process. Unfortunately the GC heap doesn't show any leak, but we noticed that the JIT code heap has grown from 8 MB after the start to more than 1 GB after a few days.

    We don't use any dynamic code generation techniques of our own. We use Linq2SQL, which is known for dynamic code generation, but we don't know if it can cause such a problem. The main question is whether there is any technique to analyze the dump and check where all these Host Code Heap blocks that are shown in the WinDbg dumps come from?

    [Update] In the meantime we did some more analysis and had Linq2SQL as a probable suspect, especially since we do not use precompiled queries. The following example program creates exactly the same behaviour, where more and more Host Code Heap blocks are created over time:

        using System;
        using System.Linq;
        using System.Threading;

        namespace LinqStressTest
        {
            class Program
            {
                static void Main(string[] args)
                {
                    for (int i = 0; i < 100; ++i)
                        ThreadPool.QueueUserWorkItem(Worker);

                    while (runs < 1000000)
                    {
                        Thread.Sleep(5000);
                    }
                }

                static void Worker(object state)
                {
                    for (int i = 0; i < 50; ++i)
                    {
                        using (var ctx = new DataClasses1DataContext())
                        {
                            long id = rnd.Next();
                            var x = ctx.AccountNucleusInfos.Where(an => an.Account.SimPlayers.First().Id == id).SingleOrDefault();
                        }
                    }
                    var localruns = Interlocked.Add(ref runs, 1);
                    System.Console.WriteLine("Action: " + localruns);
                    ThreadPool.QueueUserWorkItem(Worker);
                }

                static Random rnd = new Random();
                static long runs = 0;
            }
        }

    When we replace the Linq query with a precompiled one, the problem seems to disappear.
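
    For reference, a hedged sketch of what "precompiled" means here, using CompiledQuery.Compile so the expression tree is translated once and reused, instead of being recompiled (with fresh dynamic methods JITed) on every call; the entity type name AccountNucleusInfo is inferred from the question:

        static class Queries
        {
            public static readonly Func<DataClasses1DataContext, long, AccountNucleusInfo> ByPlayerId =
                System.Data.Linq.CompiledQuery.Compile(
                    (DataClasses1DataContext ctx, long id) =>
                        ctx.AccountNucleusInfos
                           .Where(an => an.Account.SimPlayers.First().Id == id)
                           .SingleOrDefault());
        }

        // inside Worker:
        // var x = Queries.ByPlayerId(ctx, id);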

    Read the article

  • Using a complex datatype with a Python SUDS client

    - by sachin
    Hi, I am trying to call a webservice from a Python client using SUDS. When I call a function with a complex datatype as an input parameter, it is not passed correctly, but a complex datatype is returned correctly from a webservice call.

    Webservice type: SOAP binding 1.1, document/literal. Webserver: WebLogic 10.3. Python version: 2.6.5; SUDS version: 0.3.9.

    Here is the code I am using. The Python client:

        from suds.client import Client

        url = 'http://192.168.1.3:7001/WebServiceSecurityOWSM-simple_ws-context-root/SimpleServicePort?WSDL'
        client = Client(url)
        print client

        # simple function with no operation on input...
        result = client.service.sopHello()
        print result

        result = client.service.add10(10)
        print result

        params = client.factory.create('paramBean')
        print params
        params.intval = 10
        params.longval = 20
        params.strval = 'string value'
        print params

        try:
            result = client.service.printParamBean(params)
            print result
        except WebFault, e:
            print e

        try:
            result = client.service.modifyParamBean(params)
            print result
        except WebFault, e:
            print e

        print params

    The webservice Java class:

        package simple_ws;

        import javax.jws.Oneway;
        import javax.jws.WebMethod;
        import javax.jws.WebService;
        import javax.jws.soap.SOAPBinding;

        public class SimpleService {
            public SimpleService() {
            }

            public void sopHello(int i) {
                System.out.println("sopHello: hello");
            }

            public int add10(int i) {
                System.out.println("add10:");
                return 10 + i;
            }

            public void printParamBean(ParamBean pb) {
                System.out.println(pb);
            }

            public ParamBean modifyParamBean(ParamBean pb) {
                System.out.println(pb);
                pb.setIntval(pb.getIntval() + 10);
                pb.setStrval(pb.getStrval() + "blah blah");
                pb.setLongval(pb.getLongval() + 200);
                return pb;
            }
        }

    And the bean class:

        package simple_ws;

        public class ParamBean {
            int intval;
            String strval;
            long longval;

            public void setIntval(int intval) {
                this.intval = intval;
            }

            public int getIntval() {
                return intval;
            }

            public void setStrval(String strval) {
                this.strval = strval;
            }

            public String getStrval() {
                return strval;
            }

            public void setLongval(long longval) {
                this.longval = longval;
            }

            public long getLongval() {
                return longval;
            }

            public String toString() {
                String stri = "\nInt val:" + intval;
                String strstr = "\nstrval val:" + strval;
                String strl = "\nlong val:" + longval;
                return stri + strstr + strl;
            }
        }

    So the issue is this: on the call client.service.printParamBean(params) in the Python client, the output on the server side is:

        Int val:0
        strval val:null
        long val:0

    But on the call client.service.modifyParamBean(params), the client output is:

        (reply){
            intval = 10
            longval = 200
            strval = "nullblah blah"
        }

    What am I doing wrong here?
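
    One thing worth trying (a hedged guess added here, not a confirmed fix): with document/literal services, suds sometimes needs the namespace-qualified type name when building complex objects, using whatever prefix "print client" lists for the bean:

        # 'ns0' is an assumption; substitute the prefix shown by print client
        params = client.factory.create('ns0:paramBean')

    If the unqualified name resolves to an empty, namespace-less type, the fields can be silently dropped on serialization, which would match the all-defaults output seen on the server.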

    Read the article

  • difference equations in MATLAB - why the need to switch signs?

    - by jefflovejapan
    Perhaps this is more of a math question than a MATLAB one; I'm not really sure. I'm using MATLAB to compute an economic model (the New Hybrid ISLM model), and there's a confusing step where the author switches the sign of the solution.

    First, the author declares symbolic variables and sets up a system of difference equations. Note that the suffixes "a" and "2t" both mean "time t+1", "2a" means "time t+2" and "t" means "time t":

        %% --------------------------[2] MODEL proc-----------------------------%%
        % Define endogenous vars ('a' denotes t+1 values)
        syms y2a pi2a ya pia va y2t pi2t yt pit vt ;

        % Monetary policy rule
        ia = q1*ya+q2*pia;
        % ia = q1*(ya-yt)+q2*pia;   %% option: speed limit policy

        % Model equations
        IS = rho*y2a+(1-rho)*yt-sigma*(ia-pi2a)-ya;
        AS = beta*pi2a+(1-beta)*pit+alpha*ya-pia+va;
        dum1 = ya-y2t;
        dum2 = pia-pi2t;
        MPs = phi*vt-va;
        optcon = [IS ; AS ; dum1 ; dum2; MPs];

    He then computes the matrix A:

        %% ------------------ [3] Linearization proc ------------------------%%
        % Differentiation
        xx = [y2a pi2a ya pia va y2t pi2t yt pit vt] ;   % define vars
        jopt = jacobian(optcon,xx);

        % Define Linear Coefficients
        coef = eval(jopt);
        B = [ -coef(:,1:5) ] ;
        C = [ coef(:,6:10) ] ;
        % B[c(t+1) l(t+1) k(t+1) z(t+1)] = C[c(t) l(t) k(t) z(t)]

        A = inv(C)*B ;   % (Linearized reduced form)

    As far as I understand, this A is the solution to the system: it's the matrix that turns time t+1 and t+2 variables into time t and t+1 variables (it's a forward-looking model). My question is essentially: why is it necessary to reverse the signs of all the partial derivatives in B in order to get this solution? I'm talking about this step:

        B = [ -coef(:,1:5) ] ;

    Reversing the sign here obviously reverses the sign of every component of A, but I don't have a clear understanding of why it's necessary. My apologies if the question is unclear or if this isn't the best place to ask.
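
    A sketch of the algebra that seems to answer this (added here, not the original author's explanation). The model stacks every equation so that it equals zero, F(x_{t+1}, x_t) = 0, and the Jacobian is taken with respect to [x_{t+1}, x_t], so columns 1:5 of coef are the derivatives J_{t+1} with respect to the t+1 variables and columns 6:10 are J_t:

        \[
        0 = F(x_{t+1}, x_t) \approx J_{t+1}\,x_{t+1} + J_{t}\,x_{t}
        \quad\Longrightarrow\quad
        \underbrace{(-J_{t+1})}_{B}\,x_{t+1} = \underbrace{J_{t}}_{C}\,x_{t}
        \]

    Moving the t+1 block to the other side of the equals sign is what introduces the minus sign: B must be -coef(:,1:5) for the system to read B x_{t+1} = C x_t, after which A = inv(C)*B gives x_t = A x_{t+1}, the backward map the author calls the reduced form.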

    Read the article
