Search Results

Search found 11936 results on 478 pages for 'objects'.


  • A simple WCF Service (POX) without complex serialization

    - by jammer59
    I'm a complete WCF novice. I'm trying to build and deploy a very, very simple IIS 7.0 hosted web service. For reasons outside of my control it must be WCF and not ASMX. It's a wrapper service for a pre-existing web application that simply does the following: 1) Receives a POST request whose body is XML-encapsulated form elements, something like <field1>value</field1><field2>value</field2>. This is untyped XML, and the XML is atomic (a form) rather than a list of records/objects. 2) Adds a couple of tags to the request XML and then invokes another HTTP-based service with a simple POST + bare XML -- these will actually be added by some internal SQL ops, but that isn't the issue. 3) Receives the XML response from the 3rd-party service and relays it as the response to the original calling client in step 1. The clients (step 1) will be some sort of web-based scripting but could be anything: .aspx, Python, PHP, etc. I can't have SOAP, and the usual WCF-based REST examples with their contracts and serialization have me confused. This seems like a very common and very simple problem conceptually. It would be easy to implement in code, but IIS-hosted WCF is a requirement. Any pointers?
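
    A minimal sketch of one way to do this (illustrative names and URI template, not from the question): a WCF operation that takes and returns the untyped Message type over webHttpBinding, so no data contracts, serialization attributes, or SOAP envelope are involved.

        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Web;
        using System.Xml.Linq;

        [ServiceContract]
        public interface IRelayService
        {
            // Untyped Message in/out: raw POX, no data contracts.
            [OperationContract, WebInvoke(Method = "POST", UriTemplate = "")]
            Message Relay(Message request);
        }

        public class RelayService : IRelayService
        {
            public Message Relay(Message request)
            {
                // Read the posted body as raw XML.
                XElement xml = XElement.Load(request.GetReaderAtBodyContents());

                // Add the extra elements here, then (not shown) forward to the downstream service.
                xml.Add(new XElement("AddedTag", "value"));

                // MessageVersion.None => a bare XML response, matching webHttpBinding.
                return Message.CreateMessage(MessageVersion.None, "", xml.CreateReader());
            }
        }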

    Read the article

  • Using perl to split a line that may contain whitespace

    - by Tommy Fisk
    Okay, so I'm using Perl to read in a file that contains some general configuration data, organized into headers based on what the entries mean. An example follows:

        [vars]
        # This is how we define a variable!
        $var = 10;
        $str = "Hello thar!";

        # This section contains flags which can be used to modify module behavior
        # All modules read this file and if they understand any of the flags, use them
        [flags]
        Verbose = true; # Notice the errant whitespace!

        [path]
        WinPath = default; # Keyword which loads the standard PATH as defined by the operating system. Append with additional values.
        LinuxPath = default;

    Goal: Using the first line as an example, "$var = 10;", I'd like to use Perl's split function to create an array that contains the strings "$var" and "10" as elements. Using another line as an example, "Verbose = true;" should become [Verbose, true], i.e. no whitespace is present. This is needed because I will be outputting these values to a new file (which a different piece of C++ code will read) to instantiate dictionary objects. Just to give you a little taste of what it might look like (just making it up as I go along):

        define new dictionary name: [flags]
        # Start defining keys => values
        new key name: Verbose
        new value val: 10
        # End dictionary

    Oh, and here is the code I currently have, along with what it is doing (incorrectly):

        sub makeref($) {
            my @line = (split (/=/)); # Produces ["Verbose", " true"];
        }
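
    A minimal sketch of one way to handle this (my illustration, not the poster's final code): strip the comment and trailing semicolon first, then split on '=' while eating the surrounding whitespace.

        # Assumes each line looks like "key = value;" with an optional trailing comment.
        sub makeref {
            my ($line) = @_;
            $line =~ s/#.*$//;           # drop any trailing comment
            $line =~ s/;\s*$//;          # drop the trailing semicolon
            $line =~ s/^\s+|\s+$//g;     # trim leading/trailing whitespace
            my ($key, $value) = split /\s*=\s*/, $line, 2;
            return [ $key, $value ];     # "Verbose =  true;" -> ["Verbose", "true"]
        }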

    Read the article

  • Filter a date property between a begin and end Dates with JDOQL

    - by Sergio del Amo
    I want to code a function that gets a list of Entry objects whose date field is between a beginPeriod and an endPeriod. I post below a code snippet which works with a HACK: I have to subtract a day from the begin-period date. It seems the greater-or-equal condition does not work. Any idea why I have this issue?

        public static List<Entry> getEntries(Date beginPeriod, Date endPeriod) {
            /* TODO
             * The greater-or-equal condition does not seem to work in the filter below.
             * Subtract a day and it seems to work.
             */
            Calendar calendar = Calendar.getInstance();
            calendar.set(beginPeriod.getYear(), beginPeriod.getMonth(), beginPeriod.getDate() - 1);
            beginPeriod = calendar.getTime();

            PersistenceManager pm = JdoUtil.getPm();
            Query q = pm.newQuery(Entry.class);
            q.setFilter("this.date >= beginPeriodParam && this.date <= endPeriodParam");
            q.declareParameters("java.util.Date beginPeriodParam, java.util.Date endPeriodParam");
            List<Entry> entries = (List<Entry>) q.execute(beginPeriod, endPeriod);
            return entries;
        }
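
    A hedged guess at the cause (not stated in the question): if beginPeriod carries a time-of-day component, entries stored earlier on that same day will fail the >= comparison, which would explain why subtracting a day "fixes" it. A sketch of truncating the lower bound to midnight instead:

        // Assumes java.util.Date/Calendar, as in the snippet above.
        Calendar cal = Calendar.getInstance();
        cal.setTime(beginPeriod);
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        beginPeriod = cal.getTime();   // midnight at the start of the begin-period day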

    Read the article

  • Linq to SQL generates StackOverflowException in tight Insert loop

    - by ChrisW
    I'm parsing an XML file and inserting the rows into a table (and related tables) using LinqToSql. I parse the XML file using LinqToXml into an IEnumerable. Then I create a foreach loop, where I build my LinqToSql objects and call InsertOnSubmit and SubmitChanges at the end of each iteration. Nothing special here. Usually, I make it through around 4,100 records before receiving a StackOverflowException from LinqToSql, right as I call SubmitChanges. It's not always on 4,100... sometimes it's 4,102, sometimes less, etc. I've tried inserting the records that generate the failure individually, by putting them in their own Xml file, and they insert fine... so it's not the data. I'm running the whole process from an MVC2 app that is uploading the Xml file to the server. I've adjusted my WebRequest timeouts to appropriate values, and again, I'm not getting timeout errors, just StackOverflowExceptions. So is there some pattern that I should follow for times when I have to do many insertions into the database? I never encounter this exception on smaller Xml files, just larger ones.
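
    A common pattern for large insert loops, sketched under assumed names (MyDataContext, MyTable, MapToEntity, and records are all hypothetical): dispose and recreate the DataContext every few hundred rows so its change tracker never grows without bound.

        const int batchSize = 500;
        int count = 0;
        var db = new MyDataContext();
        foreach (var record in records)               // records parsed from the XML file
        {
            db.MyTable.InsertOnSubmit(MapToEntity(record));
            if (++count % batchSize == 0)
            {
                db.SubmitChanges();
                db.Dispose();
                db = new MyDataContext();             // fresh, empty change tracker
            }
        }
        db.SubmitChanges();                            // flush the final partial batch
        db.Dispose();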

    Read the article

  • Qt Jambi: Accessing the content of QNetworkReply

    - by Richard
    Hi All, I'm having trouble accessing the content of QNetworkReply objects. Content appears to be empty or zero. According to the docs (translating from C++ to Java) I think I've got this set up correctly, but to no avail. Additionally, an "Unknown error" is being reported. Any ideas much appreciated. Code:

        public class Test extends QObject {
            private QWebPage page;

            public Test() {
                page = new QWebPage();
                QNetworkAccessManager nac = new QNetworkAccessManager();
                nac.finished.connect(this, "requestFinished(QNetworkReply)");
                page.setNetworkAccessManager(nac);
                page.loadProgress.connect(this, "loadProgress(int)");
                page.loadFinished.connect(this, "loadFinished()");
            }

            public void requestFinished(QNetworkReply reply) {
                reply.reset();
                reply.open(OpenModeFlag.ReadOnly);
                reply.readyRead.connect(this, "ready()");               // never gets called
                System.out.println("bytes: " + reply.url().toString()); // writes out asset uri no problem
                System.out.println("bytes: " + reply.bytesToWrite());   // 0
                System.out.println("At end: " + reply.atEnd());         // true
                System.out.println("Error: " + reply.errorString());    // "Unknown error"
            }

            public void loadProgress(int progress) {
                System.out.println("Loaded " + progress + "%");
            }

            public void loadFinished() {
                System.out.println("Done");
            }

            public void ready() {
                System.out.println("Ready");
            }

            public void open(String url) {
                page.mainFrame().load(new QUrl(url));
            }

            public static void main(String[] args) {
                QApplication.initialize(new String[] { });
                Test t = new Test();
                t.open("http://news.bbc.co.uk");
                QApplication.exec();
            }
        }

    Read the article

  • List.Sort in C#: comparer being called with null object

    - by cbp
    I am getting strange behaviour using the built-in C# List.Sort function with a custom comparer. For some reason it sometimes calls the comparer class's Compare method with a null object as one of the parameters. But if I check the list with the debugger there are no null objects in the collection. My comparer class looks like this:

        public class DelegateToComparer<T> : IComparer<T>
        {
            private readonly Func<T, T, int> _comparer;

            public int Compare(T x, T y)
            {
                return _comparer(x, y);
            }

            public DelegateToComparer(Func<T, T, int> comparer)
            {
                _comparer = comparer;
            }
        }

    This allows a delegate to be passed to the List.Sort method, like this:

        mylist.Sort(new DelegateToComparer<MyClass>(
            (x, y) => { return x.SomeProp.CompareTo(y.SomeProp); }));

    So the above delegate will throw a null reference exception for the x parameter, even though no elements of mylist are null. UPDATE: Yes, I am absolutely sure that it is parameter x throwing the null reference exception! UPDATE: Instead of using the framework's List.Sort method, I tried a custom sort method (i.e. new BubbleSort().Sort(mylist)) and the problem went away. As I suspected, the List.Sort method passes null to the comparer for some reason.

    Read the article

  • instantiate python object within a c function called via ctypes

    - by gwk
    My embedded Python 3.3 program segfaults when I instantiate Python objects from a C function called by ctypes. After setting up the interpreter, I can successfully instantiate a Python int (as well as a custom C extension type) from the C main:

        #import <Python/Python.h>

        #define LOGPY(x) \
        { fprintf(stderr, "%s: ", #x); PyObject_Print((PyObject*)(x), stderr, 0); fputc('\n', stderr); }

        // c function to be called from python script via ctypes.
        void instantiate() {
            PyObject* instance = PyObject_CallObject((PyObject*)&PyLong_Type, NULL);
            LOGPY(instance);
        }

        int main(int argc, char* argv[]) {
            Py_Initialize();
            instantiate(); // works fine

            // run a script that calls instantiate() via ctypes.
            FILE* scriptFile = fopen("emb.py", "r");
            if (!scriptFile) {
                fprintf(stderr, "ERROR: cannot open script file\n");
                return 1;
            }
            PyRun_SimpleFileEx(scriptFile, scriptPath, 1); // close on completion
            return 0;
        }

    I then run a Python script using PyRun_SimpleFileEx. It appears to run just fine, but when it calls instantiate() via ctypes, the program segfaults inside PyObject_CallObject:

        import ctypes as ct
        dy = ct.CDLL('./emb')
        dy.instantiate() # segfaults

    lldb output:

        instance: 0
        Process 52068 stopped
        * thread #1: tid = 0x1c03, 0x000000010000d3f5 Python`PyObject_Call + 69, stop reason = EXC_BAD_ACCESS (code=1, address=0x18)
            frame #0: 0x000000010000d3f5 Python`PyObject_Call + 69
        Python`PyObject_Call + 69:
        -> 0x10000d3f5: movl 24(%rax), %edx
           0x10000d3f8: incl %edx
           0x10000d3fa: movl %edx, 24(%rax)
           0x10000d3fd: leaq 2069148(%rip), %rax ; _Py_CheckRecursionLimit
        (lldb) bt
        * thread #1: tid = 0x1c03, 0x000000010000d3f5 Python`PyObject_Call + 69, stop reason = EXC_BAD_ACCESS (code=1, address=0x18)
            frame #0: 0x000000010000d3f5 Python`PyObject_Call + 69
            frame #1: 0x00000001000d5197 Python`PyEval_CallObjectWithKeywords + 87
            frame #2: 0x0000000201100d8e emb`instantiate + 30 at emb.c:9

    Why does the call to instantiate() fail from ctypes only? The function only crashes when it calls into the Python lib, so perhaps some interpreter state is getting munged by the ctypes FFI call?
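
    One hedged possibility, not confirmed by the post: ctypes.CDLL releases the GIL around each foreign call, so instantiate() may be running without the GIL when invoked from emb.py; loading the library with ctypes.PyDLL, or re-acquiring the GIL inside the C function, would rule that out. A sketch of the latter:

        // Assumption only: the crash is GIL-related. Acquire it before touching the C API.
        void instantiate(void) {
            PyGILState_STATE gstate = PyGILState_Ensure();
            PyObject* instance = PyObject_CallObject((PyObject*)&PyLong_Type, NULL);
            LOGPY(instance);              // LOGPY macro as defined in the question
            Py_XDECREF(instance);
            PyGILState_Release(gstate);
        }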

    Read the article

  • how to manage a "resource" array efficiently

    - by Haiyuan Zhang
    The scenario of my question is that one needs to use a fixed-size array to keep track of a certain number of "objects". The object here can be as simple as an integer or as complex as a very fancy data structure. And "keep track" here means to allocate one object when another part of the app needs an instance, and recycle it for future allocation when that instance is returned. Finally, let me use C++ to put my problem in a more descriptive way:

        #define MAX 65535 /* 65535 just indicates that many items should be handled. Performance demanding! */

        typedef struct {
            int item;
        } Item_t;

        Item_t items[MAX];

        class itemManager {
        private:
            /* up to you.... */
        public:
            int get();            /* get an index to a free Item_t in items */
            bool put(int index);  /* recycle the Item_t indicated by an index in items */
        };

    How will you implement the two public functions of itemManager? It's up to you to add any private members.
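
    A minimal sketch of one common answer (my own illustration, not from the post): keep the indices of all free slots on a stack, so both get() and put() run in O(1).

        // Assumes the MAX/Item_t/items declarations from the question.
        #include <vector>

        class itemManager {
        private:
            std::vector<int> freeList;           // indices of currently free slots
        public:
            itemManager() {
                freeList.reserve(MAX);
                for (int i = MAX - 1; i >= 0; --i)
                    freeList.push_back(i);       // initially every slot is free
            }
            int get() {
                if (freeList.empty()) return -1; // -1 signals "no free item"
                int index = freeList.back();
                freeList.pop_back();
                return index;
            }
            bool put(int index) {
                if (index < 0 || index >= MAX) return false;
                freeList.push_back(index);       // caller promises the slot was in use
                return true;
            }
        };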

    Read the article

  • Is it okay to implement reference counting through composition?

    - by Billy ONeal
    Most common re-usable reference counted objects use private inheritance to implement re-use. I'm not a huge fan of private inheritance, and I'm curious if this is an acceptable way of handling things: class ReferenceCounter { std::size_t * referenceCount; public: ReferenceCounter() : referenceCount(NULL) {}; ReferenceCounter(ReferenceCounter& other) : referenceCount(other.referenceCount) { if (!referenceCount) { referenceCount = new std::size_t(1); other.referenceCount = referenceCount; } else { ++(*referenceCount); } }; ReferenceCounter& operator=(const ReferenceCounter& other) { ReferenceCounter temp(other); swap(temp); return *this; }; void swap(ReferenceCounter& other) { std::swap(referenceCount, other.referenceCount); }; ~ReferenceCounter() { if (referenceCount) { --(*referenceCount); if (!*referenceCount) delete referenceCount; } }; operator bool() const { return referenceCount && (*referenceCount != 0); }; }; class SomeClientClass { HANDLE someHandleThingy; ReferenceCounter objectsStillActive; public: SomeClientClass() { //Construct handle thingy } ~SomeClientClass() { if (objectsStillActive) return; //Release resources }; }; or are there subtle problems with this I'm not seeing?

    Read the article

  • Generic list typecasting problem

    - by AJ
    Hello, I'm new to C# and am stuck on the following. I have a Silverlight web service that uses LINQ to query an ADO.NET entity object, e.g.:

        [OperationContract]
        public List<Customer> GetData()
        {
            using (TestEntities ctx = new TestEntities())
            {
                var data = from rec in ctx.Customer select rec;
                return data.ToList();
            }
        }

    This works fine, but what I want to do is make this more abstract. The first step would be to return a List<EntityObject>, but this gives a compiler error, e.g.:

        [OperationContract]
        public List<EntityObject> GetData()
        {
            using (TestEntities ctx = new TestEntities())
            {
                var data = from rec in ctx.Customer select rec;
                return data.ToList();
            }
        }

    The error is:

        Error 1 Cannot implicitly convert type 'System.Collections.Generic.List<SilverlightTest.Web.Customer>' to 'System.Collections.Generic.IEnumerable<System.Data.Objects.DataClasses.EntityObject>'. An explicit conversion exists (are you missing a cast?)

    What am I doing wrong? Thanks, AJ
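
    A minimal sketch of one way around the error (my suggestion, not from the post): a List<Customer> is not a List<EntityObject>, so convert the elements explicitly, for example with Enumerable.Cast, after the query has run.

        // Requires 'using System.Linq;' and assumes Customer derives from EntityObject.
        [OperationContract]
        public List<EntityObject> GetData()
        {
            using (TestEntities ctx = new TestEntities())
            {
                return ctx.Customer
                          .AsEnumerable()        // execute the query, then convert in memory
                          .Cast<EntityObject>()
                          .ToList();
            }
        }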

    Read the article

  • Problem with SQLite related nUnit-tests after upgrade to VS2010 and Re#5

    - by stiank81
    After converting to Visual Studio 2010 with ReSharper 5, some of my unit tests started failing. More specifically, this applies to all unit tests that use NHibernate with SQLite. The problem seems to be related to SQLite somehow. The unit tests that do not involve NHibernate and SQLite are still running fine. The exception is as follows:

        NHibernate.HibernateException : Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.2.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4.
        ----> System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation.
        ----> NHibernate.HibernateException : The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use <qualifyAssembly/> element in the application configuration file to specify the full name of the assembly.
        TearDown : System.NullReferenceException : Object reference not set to an instance of an object.

    The exception is the NullReferenceException on TearDown when cleaning up NHibernate objects that weren't successfully created, but the problem seems to be related to SQLite somehow. I run my unit tests through ReSharper, but I get the same exception when running them directly through the NUnit.exe application. However, running them through the x86 variant (NUnit-x86.exe), all tests run fine. Can it be related to some mixing of 64-bit and 32-bit DLLs? It still runs fine through VS2008 + ReSharper 4.5. Note that the target framework of my projects is still .NET 3.5. Anyone seen this problem before?

    Read the article

  • WCF Service instead of ASMX Web Service?

    - by wchrisjohnson
    I'm writing a SOAP server that will act as an endpoint for an external client. The external client expects SOAP 1.1. I'll be taking embedded business objects in the SOAP messages and passing them to an internal application, getting responses back, and responding with SOAP messages to the external client. I did the traditional ASMX-based web services several years ago. Now I've been exploring WCF services and wondering the best approach to take.

    1) Should WCF be considered a superset of ASMX web services?
    2) Is there any reason to still write new web services using ASMX instead of WCF?
    3) Does WCF provide better facilities for working with SOAP messages, as opposed to SOAP Extensions?
    4) Can I restrict communication to SOAP 1.1 using WCF, the way I can with a web.config change in ASMX?
    5) Does WCF have an easy way to log or review the requests that hit the service without resorting to something like SOAP extensions?

    Sorry my questions are not very specific; still trying to get a handle on what I need to know... Using VS2008, Windows Server 2008. Chris
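
    For items 4 and 5, a minimal sketch (illustrative config; the service and contract names are made up): basicHttpBinding speaks SOAP 1.1 on the wire, and WCF's built-in message logging captures requests without custom extensions.

        <!-- Hypothetical names. A System.ServiceModel.MessageLogging trace listener is
             also needed under <system.diagnostics> for the log to be written anywhere. -->
        <system.serviceModel>
          <services>
            <service name="MyApp.SoapEndpointService">
              <endpoint address="" binding="basicHttpBinding" contract="MyApp.ISoapEndpointService" />
            </service>
          </services>
          <diagnostics>
            <messageLogging logEntireMessage="true"
                            logMessagesAtServiceLevel="true"
                            logMalformedMessages="true" />
          </diagnostics>
        </system.serviceModel>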

    Read the article

  • Need some help/advice on WCF Per-Call Service and NServiceBus interop.

    - by Alexey
    I have a WCF per-call service which provides data for clients and at the same time is integrated with NServiceBus. All stateful objects are stored in a UnityContainer which is integrated into a custom service host. NServiceBus is configured in the service host and uses the same container as the service instances. Every client has its own instance context (described by Juval Lowy in his book, in the chapter about Durable Services). If I need to send a request over the bus, I just use some kind of dispatcher and wait for the response using Thread.Sleep(). Since the services are per-call this is OK, as far as I know. But I am a bit confused about messages from the bus that the service must handle and provide to clients. For some data, like stock quotes, I just update some kind of stateful object and then, when clients invoke GetQuotesData(), provide data from this object. But there are numerous service messages, like "new quote added", etc. At this moment I have an idea to implement something like a "postman daemon" =)) and store this type of message in the instance context. Then the client will invoke GetMail(), receive those messages and parse them. The problem is that NServiceBus messages are interface-based and I can't pass them over WCF, so I need to convert them to types derived from some abstract class. I don't know the best way to handle this situation. I'd be very grateful for any advice on this. Thanks in advance.

    Read the article

  • Why not lump all service classes into a Factory method (instead of injecting interfaces)?

    - by Andrew
    We are building an ASP.NET project, and encapsulating all of our business logic in service classes. Some is in the domain objects, but generally those are rather anemic (due to the ORM we are using, that won't change). To better enable unit testing, we define interfaces for each service and utilize D.I.. E.g. here are a couple of the interfaces: IEmployeeService IDepartmentService IOrderService ... All of the methods in these services are basically groups of tasks, and the classes contain no private member variables (other than references to the dependent services). Before we worried about Unit Testing, we'd just declare all these classes as static and have them call each other directly. Now we'll set up the class like this if the service depends on other services: public EmployeeService : IEmployeeService { private readonly IOrderService _orderSvc; private readonly IDepartmentService _deptSvc; private readonly IEmployeeRepository _empRep; public EmployeeService(IOrderService orderSvc , IDepartmentService deptSvc , IEmployeeRepository empRep) { _orderSvc = orderSvc; _deptSvc = deptSvc; _empRep = empRep; } //methods down here } This really isn't usually a problem, but I wonder why not set up a factory class that we pass around instead? i.e. public ServiceFactory { virtual IEmployeeService GetEmployeeService(); virtual IDepartmentService GetDepartmentService(); virtual IOrderService GetOrderService(); } Then instead of calling: _orderSvc.CalcOrderTotal(orderId) we'd call _svcFactory.GetOrderService.CalcOrderTotal(orderid) What's the downfall of this method? It's still testable, it still allows us to use D.I. (and handle external dependencies like database contexts and e-mail senders via D.I. within and outside the factory), and it eliminates a lot of D.I. setup and consolidates dependencies more. Thanks for your thoughts!

    Read the article

  • Dependency Injection and decoupling of software layers

    - by cs31415
    I am trying to implement dependency injection to make my app tester-friendly. I have a rather basic doubt. The data layer uses a SqlConnection object to connect to a SQL Server database, so SqlConnection is a dependency of the data access layer. In accordance with the laws of dependency injection, we must not new() dependent objects, but rather accept them through constructor arguments. Not wanting to upset the DI gods, I dutifully create a constructor in my DAL that takes in a SqlConnection. The business layer calls the DAL; the business layer must therefore pass in a SqlConnection. The presentation layer calls the business layer; hence it, too, must pass a SqlConnection to the business layer. This is great for class isolation and testability. But didn't we just couple the UI and business layers to a specific implementation of the data layer, which happens to use a relational database? Why do the presentation and business layers need to know that the underlying data store is SQL? What if the app needs to support multiple data sources other than SQL Server (such as XML files, comma-delimited files, etc.)? Furthermore, what if I add another object upon which my data layer is dependent (say, a second database)? Now I have to modify the upper layers to pass in this new object. How can I avoid this merry-go-round and reap all the benefits of DI without the pain?
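
    A minimal sketch of the usual answer (my illustration; the interface and class names are made up): inject an abstraction that the DAL owns, such as a connection factory, and let the composition root pick the implementation, so the upper layers never see SqlConnection at all.

        // using System.Data; using System.Data.SqlClient;

        // The DAL depends only on this abstraction.
        public interface IConnectionFactory
        {
            IDbConnection CreateConnection();
        }

        // One implementation, chosen at the composition root (e.g. container registration).
        public class SqlConnectionFactory : IConnectionFactory
        {
            private readonly string _connectionString;
            public SqlConnectionFactory(string connectionString) { _connectionString = connectionString; }
            public IDbConnection CreateConnection() { return new SqlConnection(_connectionString); }
        }

        public class CustomerRepository
        {
            private readonly IConnectionFactory _connections;
            public CustomerRepository(IConnectionFactory connections) { _connections = connections; }
            // The business and presentation layers depend on CustomerRepository (or its interface),
            // not on SqlConnection, so swapping the data store only touches the registration code.
        }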

    Read the article

  • Hibernate mapping - "Could not determine type"

    - by Pool
    I currently have the following objects persisting successfully: Person first name, etc. Exams title, date, etc. I'd like to now create a third table Exam results. For this table I believe it should be person ID, exam ID and result, and this is a many to many relationship. @Entity public class ExamResult { private Exam exam; private Person person; private double value; @Id @ManyToOne( cascade = {CascadeType.PERSIST, CascadeType.MERGE} ) @JoinColumn(name="EXAM_ID") public Exam getExam() { return exam; } public void setExam(Exam exam) { this.exam = exam; } @Id @ManyToOne( cascade = {CascadeType.PERSIST, CascadeType.MERGE} ) @JoinColumn(name="PERSON_ID") public Person getPerson() { return person; } public void setPerson(Person person) { this.person = person; } public double getValue() { return value; } public void setValue(double value) { this.value = value; } } The error: org.hibernate.MappingException: Could not determine type for: Person, at table: ExamResult, for columns: [org.hibernate.mapping.Column(person)] I think I may be going about this the wrong way, but I can't work out how to proceed with this relationship from the tutorial. Any ideas?
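
    A hedged sketch of one way out (my illustration, not necessarily the accepted fix): give ExamResult its own surrogate key and keep exam and person as plain @ManyToOne associations, which avoids the composite-@Id mapping the error is pointing at.

        // javax.persistence annotations assumed; field access used for brevity.
        @Entity
        public class ExamResult {

            @Id
            @GeneratedValue
            private Long id;                 // surrogate key instead of a composite @Id

            @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
            @JoinColumn(name = "EXAM_ID")
            private Exam exam;

            @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
            @JoinColumn(name = "PERSON_ID")
            private Person person;

            private double value;

            // getters and setters omitted for brevity
        }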

    Read the article

  • AS3 httpservice - pass arguments to event handlers by reference

    - by Shawn Simon
    I have this code:

        var service:HTTPService = new HTTPService();
        if (search.Location && search.Location.length > 0 && chkLocalSearch.selected) {
            service.url = 'http://ajax.googleapis.com/ajax/services/search/local';
            service.request.q = search.Keyword;
            service.request.near = search.Location;
        } else {
            service.url = 'http://ajax.googleapis.com/ajax/services/search/web';
            service.request.q = search.Keyword + " " + search.Location;
        }
        service.request.v = '1.0';
        service.resultFormat = 'text';
        service.addEventListener(ResultEvent.RESULT, onServerResponse);
        service.send();

    I want to pass the search object to the result method (onServerResponse), but if I do it in a closure it gets passed by value. Is there any way to do it by reference without searching through my array of search objects for the value returned in the result? Sorry, this is simple but it's been a long day...

    Read the article

  • AS3 Memory Conservation (Loaders/BitmapDatas/Bitmaps/Sprites)

    - by rinogo
    I'm working on reducing the memory requirements of my AS3 app. I understand that once there are no remaining references to an object, it is flagged as being a candidate for garbage collection. Is it even worth it to try to remove references to Loaders that are no longer actively in use? My first thought is that it is not worth it. Here's why: My Sprites need perpetual references to the Bitmaps they display (since the Sprites are always visible in my app). So, the Bitmaps cannot be garbage collected. The Bitmaps rely upon BitmapData objects for their data, so we can't get rid of them. (Up until this point it's all pretty straightforward). Here's where I'm unsure of what's going on: Does a BitmapData have a reference to the data loaded by the Loader? In other words, is BitmapData essentially just a wrapper that has a reference to loader.content, or is the data copied from loader.content to BitmapData? If a reference is maintained, then I don't get anything by garbage collecting my loaders... Thoughts?

    Read the article

  • XamlReader.Parse throws exception on empty String

    - by sub-jp
    In our app, we need to save properties of objects to the same database table regardless of the type of object, in the form of propertyName, propertyValue, propertyType. We decided to use XamlWriter to save all of the given object's properties. We then use XamlReader to load up the XAML that was created, and turn it back into the value for the property. This works fine for the most part, except for empty strings. The XamlWriter will save an empty string as below. <String xmlns="clr-namespace:System;assembly=mscorlib" xml:space="preserve" /> The XamlReader sees this string and tries to create a string, but can't find an empty constructor in the String object to use, so it throws a ParserException. The only workaround that I can think of is to not actually save the property if it is an empty string. Then, as I load up the properties, I can check for which ones did not exist, which means they would have been empty strings. Is there some workaround for this, or is there even a better way of doing this?

    Read the article

  • How to route tree-structured URLs with ASP.NET Routing?

    - by Venemo
    Hello Everyone, I would like to achieve something very similar to this question, with some enhancements. There is an ASP.NET MVC web application. I have a tree of entities: for example, a Page class which has a property called Children, which is of type IList<Page>. (An instance of the Page class corresponds to a row in a database.) I would like to assign a unique URL to every Page in the database. I handle Page objects with a controller called PageController. Example URLs:

        http://mysite.com/Page1/
        http://mysite.com/Page1/SubPage/
        http://mysite.com/Page/ChildPage/GrandChildPage/

    You get the picture. So, I'd like every single Page object to have its own URL that is equal to its parent's URL plus its own name. In addition to that, I would also like the ability to map a single Page to the / (root) URL. I would like to apply these rules:

    1) If a URL can be handled by any other route, or a file exists in the filesystem at the specified URL, let the default URL mapping happen.
    2) If a URL can be handled by the virtual path provider, let that handle it.
    3) If there is no other, map the remaining URLs to the PageController class.

    I also found this question, and also this one and this one, but they weren't of much help, since they don't provide an explanation about my first two points. I see the following possible solutions:

    1) Map a route for each page individually. This requires me to go over the entire tree when the application starts, and add an exact-match route to the end of the route table.
    2) I could add a route with {*path} and write a custom IRouteHandler that handles it, but I can't see how I could deal with the first two rules then, since this handler would get to handle everything.

    So far, the first solution seems to be the right one, because it is also the simplest. I would really appreciate your thoughts on this. Thank you in advance!
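
    A sketch of a third possibility (my illustration; PageRepository and the route name are invented): register the {*path} catch-all last and attach a route constraint that only matches when the path exists in the page tree, so files, earlier routes, and the virtual path provider keep their normal precedence, and URLs that match nothing still fall through to a 404.

        // Registered after all other routes in RegisterRoutes (ASP.NET MVC 2 style).
        routes.MapRoute(
            "PageTree",
            "{*path}",
            new { controller = "Page", action = "Show" },
            new { path = new PageExistsConstraint() });

        public class PageExistsConstraint : IRouteConstraint
        {
            public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                              RouteValueDictionary values, RouteDirection routeDirection)
            {
                var path = values[parameterName] as string;
                // Hypothetical lookup against the Page tree in the database.
                return PageRepository.FindByPath(path) != null;
            }
        }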

    Read the article

  • MapKit custom annotations being added to map, but are not visible on the map

    - by culov
    I learned a lot from the screencast at http://icodeblog.com/2009/12/21/introduction-to-mapkit-in-iphone-os-3-0/ , and I've been trying to incorporate the way he creates a custom annotation into my iPhone app. He is hardcoding annotations and adding them individually to the map, whereas I want to add them from my data source. So I have an array of objects with a lat and a lng (I've checked that the values are within range and ought to be appearing on the map) that I iterate through and do [mapView addAnnotation:truck]. Once this process is completed, I check the number of annotations on the map with [[mapView annotations] count] and it's equal to the number it ought to be, so all the annotations are getting added to the mapView, but for some reason I can't see any annotations in the simulator. I've compared my code with his in all the pertinent places many times, but nothing seems to stand out as being incorrect. I've also reviewed my code for the last several hours trying to find a point where I do something wrong, but nothing is coming to mind. The images are named just as they are assigned in the custom AnnotationView, the loadAnnotation function is done properly, etc... I don't know what it could be. So I suppose if I could have the answer to one question it would be: what are possible causes for a mapView to contain several annotations, but not show any of them on the map? Thanks

    Read the article

  • How do I get LongVarchar out param from SPROC in ADO.NET 2.0 with SQLAnywhere 10?

    - by todthomson
    Hi All, I have a sproc 'up_selfassessform_view' which has the following parameters:

        in  ai_eqidentkey    SYSKEY
        in  ai_acidentkey    SYSKEY
        out as_eqcomments    TEXT_STRING
        out as_acexplanation TEXT_STRING

    which are domain objects - SYSKEY is 'integer' and TEXT_STRING is 'long varchar'. I can call the sproc fine from iSQL using the following code:

        create variable @eqcomments TEXT_STRING;
        create variable @acexamples TEXT_STRING;
        call up_selfassessform_view (75000146, 3, @eqcomments, @acexamples);
        select @eqcomments, @acexamples;

    which returns the correct values from the DB (so I know the sproc is good). I have configured the out param in ADO.NET like so (which has worked up until now for 'integer', 'timestamp', 'varchar(255)', etc.):

        SAParameter as_acexplanation = cmd.CreateParameter();
        as_acexplanation.Direction = ParameterDirection.Output;
        as_acexplanation.ParameterName = "as_acexplanation";
        as_acexplanation.SADbType = SADbType.LongVarchar;
        cmd.Parameters.Add(as_acexplanation);

    When I run the following code:

        SADataReader reader = cmd.ExecuteReader();

    I receive the following error:

        Parameter[2]: the Size property has an invalid size of 0.

    Which (I suppose) makes sense... but the thing is, I don't know the size of the field (it's just "long varchar"; it doesn't have a predetermined length, unlike varchar(XXX)). Anyhow, just for fun, I added the following:

        as_acexplanation.Size = 1000;

    and the above error goes away, but now when I call as_acexplanation.Value I get back a string of length 1000 which is just '\0' repeated 1000 times. So I'm really, really stuck... Any help on this one would be much appreciated. Cheers! ;) Tod T.

    Read the article

  • NHibernate: Mapping multiple classes from a single table row

    - by Michael Kurtz
    I couldn't find an answer to this specific question. I am trying to keep my domain model object-oriented and re-use objects where possible. I am having an issue determining how to provide a mapping to multiple classes from a single row. Let me explain with an example: I have a single table, call it Customer. A customer has several attributes; but, for brevity, assume it has Id, Name, Address, City, State, ZipCode. I would like to create a Customer and Address class that look like this: public class Customer { public virtual long Id {get;set;} public virtual string Name {get;set;} public virtual Address Address {get;set;} } public class Address { public virtual string Address {get;set;} public virtual string City {get;set;} public virtual string State {get;set;} public virtual string ZipCode {get;set;} } What I am having trouble with is determining what the mapping would be for the Address class within the Customer class. There is no Address table and there isn't a "set" of addresses associated with a Customer. I just want a more object-oriented view of the Customer table in code. There are several other tables that have address information in them and it would be nice to have a reusable Address class to deal with them. Addresses are not shared so breaking all addresses into a separate table with foreign keys seems to be overkill and, actually, more painful since I would need foreign keys to multiple tables. Can someone enlighten me on this type of mapping? Please provide an example if you can. Thanks for any insights! -Mike
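
    A minimal sketch of the usual approach (illustrative mapping, not from the post; column names assumed to match the question): NHibernate's <component> element maps the address columns of the same Customer row into the reusable Address class, with no Address table or foreign key involved.

        <!-- Customer.hbm.xml -->
        <class name="Customer" table="Customer">
          <id name="Id">
            <generator class="native" />
          </id>
          <property name="Name" />
          <component name="Address" class="Address">
            <property name="Address" column="Address" />
            <property name="City" />
            <property name="State" />
            <property name="ZipCode" />
          </component>
        </class>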

    Read the article

  • Entity framework memory leak after detaching newly created object

    - by Tom Peplow
    Hi, Here's a test: WeakReference ref1; WeakReference ref2; TestRepositoryEntitiesContainer context; int i = 0; using (context = GetContext<TestRepositoryEntitiesContainer>()) { context.ObjectMaterialized += (o, s) => i++; var item = context.SomeEntities.Where(e => e.SomePropertyToLookupOn == "some property").First(); context.Detach(item); ref1 = new WeakReference(item); var newItem = new SomeEntity {SomePropertyToLookupOn = "another value"}; context.SomeEntities.AddObject(newItem); ref2 = new WeakReference(newItem); context.SaveChanges(); context.SomeEntities.Detach(newItem); newItem = null; item = null; } context = null; GC.Collect(); Assert.IsFalse(ref1.IsAlive); Assert.IsFalse(ref2.IsAlive); First assert passes, second fails... I hope I'm missing something, it is late... But it appears that detaching a fetched item will actually release all handles on the object letting it be collected. However, for new objects something keeps a pointer and creates a memory leak. NB - this is EF 4.0 Anyone seen this before and worked around it? Thanks for your help! Tom

    Read the article

  • How to have a policy class implement a virtual function?

    - by dehmann
    I'm trying to design a policy-based class, where a certain interface is implemented by the policy itself, so the class derives from the policy, which itself is a template (I got this kind of thinking from Alexandrescu's book):

        #include <iostream>
        #include <vector>

        class TestInterface {
        public:
            virtual void test() = 0;
        };

        class TestImpl1 {
        public:
            void test() { std::cerr << "Impl1" << std::endl; }
        };

        template<class TestPolicy>
        class Foo : public TestInterface, TestPolicy {
        };

    Then, in the main() function, I call test() on (potentially) various different objects that all implement the same interface:

        int main() {
            std::vector<TestInterface*> foos;
            foos.push_back(new Foo<TestImpl1>());
            foos[0]->test();
            delete foos[0];
            return 0;
        }

    It doesn't compile, though, because "the following virtual functions are pure within 'Foo<TestImpl1>': virtual void TestInterface::test()". I thought TestInterface::test() is implemented because we derive from TestImpl1?
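
    A minimal sketch of one fix (my illustration): inheriting TestImpl1::test() does not override TestInterface::test(), because the two functions live in unrelated base classes; Foo has to provide the override itself and can simply forward to the policy.

        // Assumes the TestInterface/TestImpl1 definitions from the question.
        template<class TestPolicy>
        class Foo : public TestInterface, private TestPolicy {
        public:
            virtual void test() { TestPolicy::test(); }  // satisfies the pure virtual
        };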

    Read the article
