Search Results

Search found 12404 results on 497 pages for 'native types'.


  • How to model parent to child pair in MySQL (SQL)

    - by mikeschuld
    I have a data model that includes the element types Stage, Actor, and Form. Logically, Stages can be assigned pairs of (Form <--- Actor), and those pairs can be duplicated many times (i.e. the same person and the same form added to the same stage at a later date/time). Right now I am modeling this with the tables Stage, Form, and Actor, plus these join tables:

        Form_Actor
        _____________________
        |Id          |
        |FormId      | --> Id in Form
        |ActorId     | --> Id in Actor

        Stage_FormActor
        _____________________
        |Id          |
        |StageId     | --> Id in Stage
        |FormActorId | --> Id in Form_Actor

    I am using CodeSmith to generate the data layer for this setup, and none of the templates really know how to handle this type of relationship correctly when generating classes. Ideally, the ORM would have Stage.FormActors, where FormActor would be the (Form, Actor) pair. Is this the correct way to model these relationships? I have also tried using all three Ids in one table:

        Stage_Form_Actor
        _____________________
        |Id      |
        |StageId | --> Id in Stage
        |FormId  | --> Id in Form
        |ActorId | --> Id in Actor

    This doesn't really get generated very well either. Ideas?

    Read the article

  • How to use a class's type as the type argument for an inherited collection property in C#

    - by Edelweiss Peimann
    I am trying to create a representation of various types of card that inherit from a generic card class and that all contain references to their owning decks. I tried re-declaring the property, as suggested here, but it still won't convert to the specific card type. The code I currently have is as follows:

        public class Deck<T> : List<T> where T : Card
        {
            void Shuffle()
            {
                throw new NotImplementedException("Shuffle not yet implemented.");
            }
        }

        public class Card
        {
            public Deck<Card> OwningDeck { get; set; }
        }

        public class FooCard : Card
        {
            public Deck<FooCard> OwningDeck
            {
                get { return (Deck<FooCard>)base.OwningDeck; }
                set { OwningDeck = value; }
            }
        }

    The compile-time error I am getting is "Error 2: Cannot convert type 'Game.Cards.Deck<Game.Cards.Card>' to 'Game.Cards.Deck<Game.Cards.FooCard>'", along with a warning suggesting I use the new keyword to specify that the hiding is intentional. Would doing so be a violation of convention? Is there a better way? My question to Stack Overflow is this: can what I am trying to do be done elegantly in the .NET type system? If so, can some examples be provided?
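    One pattern that is sometimes used for this kind of self-referencing constraint is the "curiously recurring" generic idiom, where the card type passes itself as a type parameter so the deck is typed by the concrete card. The sketch below only illustrates that idea; Card<TSelf> and the reworked Deck are assumed names, not code from the question.

        using System.Collections.Generic;

        public class Deck<T> : List<T> where T : Card<T>
        {
            // Shuffle() and other deck behavior would go here.
        }

        public class Card<TSelf> where TSelf : Card<TSelf>
        {
            // Typed by the concrete card, so no cast is needed in subclasses.
            public Deck<TSelf> OwningDeck { get; set; }
        }

        public class FooCard : Card<FooCard> { }

    With this shape, FooCard.OwningDeck is already a Deck<FooCard>; the trade-off is that there is no longer a single non-generic Card base type for treating all cards uniformly.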

    Read the article

  • TSQL - make a literal float value

    - by David B
    I understand the host of issues in comparing floats, and lament their use in this case - but I'm not the table author and have only a small hurdle to climb... Someone has decided to use floats as you'd expect GUIDs to be used. I need to retrieve all the records with a specific float value.

        sp_help MyTable
        -- Column_name     Type   Computed  Length  Prec
        -- RandomGrouping  float  no        8       53

    Here's my naive attempt:

        --yields no results
        SELECT RandomGrouping
        FROM MyTable
        WHERE RandomGrouping = 0.867153569942739

    And here's an approximately working attempt:

        --yields 2 records
        SELECT RandomGrouping
        FROM MyTable
        WHERE RandomGrouping BETWEEN 0.867153569942739 - 0.00000001
                                 AND 0.867153569942739 + 0.00000001
        -- 0.867153569942739
        -- 0.867153569942739

    In my naive attempt, is that literal a floating point literal? Or is it really a decimal literal that gets converted later? If my literal is not a floating point literal, what is the syntax for making one?

    EDIT: Another possibility has occurred to me: it may be that a more precise number than is displayed is stored in this column. It may be impossible to create a literal that represents this number. I will accept answers that demonstrate that this is the case.

    EDIT: response to DVK. TSQL is MS SQL Server's dialect of SQL. This script works, so equality can be performed deterministically between float types:

        DECLARE @X float

        SELECT top 1 @X = RandomGrouping
        FROM MyTable
        WHERE RandomGrouping BETWEEN 0.839110948199148 - 0.000000000001
                                 AND 0.839110948199148 + 0.000000000001

        --yields two records
        SELECT *
        FROM MyTable
        WHERE RandomGrouping = @X

    I said "approximately" because that method tests for a range; with it I could get values that are not equal to my intended value. The linked article doesn't apply because I'm not (intentionally) trying to straddle the boundaries between decimal and float. I'm trying to work with only floats. This isn't about the non-convertibility of decimals to floats.

    Read the article

  • Associating Models with Polymorphic

    - by Josh Crowder
    I am trying to associate Contacts with Classes, but as two different types: current_classes and interested_classes. I know I need to enable a polymorphic association, but I am not sure where it needs to be enabled. This is what I have at the moment:

        class CreateClasses < ActiveRecord::Migration
          def self.up
            create_table :classes do |t|
              t.string :class_type
              t.string :class_name
              t.string :date
              t.timestamps
            end
          end

          def self.down
            drop_table :classes
          end
        end

        class CreateContactsInterestedClassesJoin < ActiveRecord::Migration
          def self.up
            create_table 'contacts_interested_classes', :id => false do |t|
              t.column 'class_id', :integer
              t.column 'contact_id', :integer
            end
          end

          def self.down
            drop_table 'contacts_interested_classes'
          end
        end

        class CreateContactsCurrentClassesJoin < ActiveRecord::Migration
          def self.up
            create_table 'contacts_current_classes', :id => false do |t|
              t.column 'class_id', :integer
              t.column 'contact_id', :integer
            end
          end

          def self.down
            drop_table 'contacts_current_classes'
          end
        end

    And then inside my Contacts model I want to have something like this:

        class Contact < ActiveRecord::Base
          has_and_belongs_to_many :classes,
            :join_table => "contacts_interested_classes",
            :foreign_key => "class_id",
            :as => 'interested_classes'
          has_and_belongs_to_many :classes,
            :join_table => "contacts_current_classes",
            :foreign_key => "class_id",
            :as => 'current_classes'
        end

    What am I doing wrong?

    Read the article

  • Java map with values limited by key's type parameter

    - by Ashley Mercer
    Is there a way in Java to have a map where the type parameter of a value is tied to the type parameter of a key? What I want to write is something like the following:

        public class Foo {
            // This declaration won't compile - what should it be?
            private static Map<Class<T>, T> defaultValues;

            // These two methods are just fine
            public static <T> void setDefaultValue(Class<T> clazz, T value) {
                defaultValues.put(clazz, value);
            }

            public static <T> T getDefaultValue(Class<T> clazz) {
                return defaultValues.get(clazz);
            }
        }

    That is, I can store any default value against a Class object, provided the value's type matches that of the Class object. I don't see why this shouldn't be allowed, since I can ensure when setting/getting values that the types are correct.

    EDIT: Thanks to cletus for his answer. I don't actually need the type parameters on the map itself, since I can ensure consistency in the methods which get/set values, even if it means using some slightly ugly casts.

    Read the article

  • Can someone please debug this Windows Azure Application?

    - by Vimvq1987
    Here's the myTODO project from CodePlex: myTODO project. I added all the necessary libraries, added storage, and changed obsolete types/methods, but everything went wrong when I debugged it. An exception was thrown here (in TableStorage.cs):

        public IEnumerable<TElement> ExecuteWithRetries(RetryPolicy retry)
        {
            IEnumerable<TElement> ret = null;
            if (retry == null)
            {
                throw new ArgumentNullException("retry");
            }

            retry(() =>
            {
                try
                {
                    ret = _query.Execute();
                }
                catch (InvalidOperationException e)
                {
                    if (TableStorageHelpers.CanBeRetried(e))
                    {
                        throw new TableRetryWrapperException(e);
                    }
                    throw;
                }
            });
            return ret;
        }

    I'm using Visual Studio 2008, SQL Server 2008, and Windows Azure SDK v1.1. Can anyone please debug this project for me, or suggest some way to get it working? This request is urgent. Any help is much appreciated.

    PS: If you can't download these files, please let me know and I'll upload them to another host.

    Read the article

  • Are there solutions for streamlining the update of legacy code in multiple places?

    - by ccomet
    I'm working in some old code which was originally designed for handling two different kinds of files, and I was recently tasked with adding a new kind of file to this code. Most of my problems were solved by filling out an extensive XML file with a new entry that handled everything from what lists were named to how the file type is written in plural lower case. But this ended up being insufficient, as there were maybe 50 different places in 24 different code files where I had to update hardcoded switch statements that only branched for the original two file types.

    Unfortunately there is no consistency in this; there are methods which operate half from the XML file and half off of hardcode. Some of the files which look like they would operate off of the XML file don't, and some where I would expect to need to update the hardcode don't need it. So the only way to find the majority of these is to run through testing the whole system when only part of it is operational, find the one step to fix (when I'm lucky enough that error logging actually tells me what is going on), and then run the whole thing again. This wastes time testing the parts of the code which are already confirmed to work - time better spent testing the new parts I have to add on top of it all. It's a hassle and a half, and to my luck I can expect that I will have to add yet another new kind of file in the near future.

    Are there any solutions out there which can aid in this kind of endeavour? Something where I can input some parameters of current features, document which points in a whole code project actually need to be updated, and run something nice the next time I need to add a new feature to the code. It needn't even be fully automated - something that'll help me navigate straight to the specific points in everything and maybe even record what kind of parameters need to be loaded would do. I doubt it matters specifically, but the code is comprised of ASP.NET pages, some ASP.NET controls, hundreds of C# code files, and a handful of additional XML files. It's all currently in a couple of big Visual Studio 2008 projects.
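    Tooling aside, part of the pain described above comes from the scattered switch statements themselves. A minimal C# sketch of one way to shrink that list: route per-file-type behavior through a single registry keyed by the type name, so adding a new kind of file means adding one registration rather than editing dozens of switches. All names here (IFileTypeHandler, FileTypeRegistry, and so on) are illustrative, not from the project in the question.

        using System;
        using System.Collections.Generic;

        public interface IFileTypeHandler
        {
            string TypeName { get; }     // e.g. "invoice", matching the XML configuration entry
            void Process(string path);   // whatever the old switch branches did for this type
        }

        public sealed class FileTypeRegistry
        {
            private readonly Dictionary<string, IFileTypeHandler> handlers =
                new Dictionary<string, IFileTypeHandler>(StringComparer.OrdinalIgnoreCase);

            public void Register(IFileTypeHandler handler)
            {
                handlers[handler.TypeName] = handler;
            }

            public void Process(string typeName, string path)
            {
                IFileTypeHandler handler;
                if (!handlers.TryGetValue(typeName, out handler))
                    throw new NotSupportedException("No handler registered for file type: " + typeName);
                handler.Process(path);
            }
        }

    Each former switch site then becomes a single registry.Process(typeName, path) call, and the 50 scattered update points collapse into one registration list.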

    Read the article

  • .net runtime type casting when using reflection

    - by Mike
    I need to cast a generic list of a concrete type to a generic list of an interface that the concrete type implements. This interface list is a property on an object, and I am assigning the value using reflection; I only know the value at runtime. Below is a simple code example of what I am trying to accomplish:

        public void EmployeeTest()
        {
            IList<Employee> initialStaff = new List<Employee>
            {
                new Employee("John Smith"),
                new Employee("Jane Doe")
            };

            Company testCompany = new Company("Acme Inc");
            //testCompany.Staff = initialStaff;

            PropertyInfo staffProperty = testCompany.GetType().GetProperty("Staff");
            staffProperty.SetValue(testCompany, (staffProperty.PropertyType)initialStaff, null);
        }

    The classes are defined like so:

        public class Company
        {
            private string _name;
            public string Name
            {
                get { return _name; }
                set { _name = value; }
            }

            private IList<IEmployee> _staff;
            public IList<IEmployee> Staff
            {
                get { return _staff; }
                set { _staff = value; }
            }

            public Company(string name)
            {
                _name = name;
            }
        }

        public class Employee : IEmployee
        {
            private string _name;
            public string Name
            {
                get { return _name; }
                set { _name = value; }
            }

            public Employee(string name)
            {
                _name = name;
            }
        }

        public interface IEmployee
        {
            string Name { get; set; }
        }

    Any thoughts? I am using .NET 4.0. Would the new covariance or contravariance features help? Thanks in advance.
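    For what it's worth, a minimal sketch of one way the assignment above can be made to work: IList<T> is invariant, so a List<Employee> can never be assigned to an IList<IEmployee> property, with or without reflection, and covariance only helps for read-only interfaces such as IEnumerable<out T>. Building a new IList<IEmployee> that holds the same Employee instances sidesteps the problem. This assumes the Company, Employee and IEmployee types from the question.

        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        public static class StaffAssignment
        {
            public static void AssignStaff(Company company, IEnumerable<Employee> staff)
            {
                // Materialize the concrete employees as the interface-typed list
                // that the Staff property actually expects.
                IList<IEmployee> interfaceList = staff.Cast<IEmployee>().ToList();

                PropertyInfo staffProperty = company.GetType().GetProperty("Staff");
                staffProperty.SetValue(company, interfaceList, null);
            }
        }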

    Read the article

  • Constructor Overloading

    - by Mark Baker
    Normally when I want to create a class constructor that accepts different types of parameters, I'll use a kludgy overloading principle of not defining any args in the constructor definition. For example, for an ECEF coordinate class constructor, I want it to accept either $x, $y and $z arguments, or a single array argument containing x, y and z values, or a single LatLong object. I'd create a constructor looking something like:

        function __construct()
        {
            // Identify if any arguments have been passed to the constructor
            if (func_num_args() > 0) {
                $args = func_get_args();
                // Identify the overload constructor required, based on the datatype of the first argument
                $argType = gettype($args[0]);
                switch($argType) {
                    case 'array' :
                        // Array of Cartesian co-ordinate values
                        $overloadConstructor = 'setCoordinatesFromArray';
                        break;
                    case 'object' :
                        // A LatLong object that needs converting to Cartesian co-ordinate values
                        $overloadConstructor = 'setCoordinatesFromLatLong';
                        break;
                    default :
                        // Individual Cartesian co-ordinate values
                        $overloadConstructor = 'setCoordinatesFromXYZ';
                        break;
                }
                // Call the appropriate overload constructor
                call_user_func_array(array($this,$overloadConstructor),$args);
            }
        } // function __construct()

    I'm looking at an alternative: provide a straight constructor with $x, $y and $z as defined arguments, and provide static methods createECEFfromArray() and createECEFfromLatLong() that handle all the necessary extraction of x, y and z, then create a new ECEF object using the standard constructor and return that. Which option is cleaner from an OO purist's perspective?

    Read the article

  • REST API - why use PUT DELETE POST GET?

    - by Andre
    So - I was looking through some articles on creating REST APIs, and some of them suggest using all types of HTTP requests: PUT, DELETE, POST and GET. So we would create, for example, index.php and write the API this way:

        $method = $_SERVER['REQUEST_METHOD'];
        $request = split("/", substr(@$_SERVER['PATH_INFO'], 1));

        switch ($method) {
          case 'PUT':
            ....some put action....
            break;
          case 'POST':
            ....some post action....
            break;
          case 'GET':
            ....some get action....
            break;
          case 'DELETE':
            ....some delete action....
            break;
        }

    OK - granted - I don't know much about web services (yet). But wouldn't it be easier to just accept a JSON object through a normal $_POST and then respond in JSON as well? We can easily serialize/deserialize via PHP's json_encode and json_decode and do whatever we want with that data, without having to deal with different HTTP request methods. Am I missing something?

    UPDATE 1: OK - after digging through various APIs and learning a lot about XML-RPC, JSON-RPC, SOAP and REST, I came to the conclusion that this type of API is sound. Stack Exchange is pretty much using this approach on their sites, and I do think that these people know what they are doing: Stack Exchange API.

    Read the article

  • Simplest distributed persistent key/value store that supports primary key range queries

    - by StaxMan
    I am looking for a properly distributed (i.e. not just sharded) and persistent (not bounded by available memory on a single node, or cluster of nodes) key/value ("NoSQL") store that supports range queries by primary key. So far the closest such system is Cassandra, which does all of the above. However, it adds support for other features that are not essential for me, so while I like it (and will consider using it, of course), I am trying to figure out whether there might be other mature projects that implement just what I need.

    Specifically, the only aspect of the value I need is to access it as a blob. For the key, however, I need range queries (as in, access values in order, limited by start and/or end values). While values can have structure, there is no need to use that structure for anything on the server side (I can do client-side data binding, flexible value/content types, etc.). As an added bonus, Cassandra-style storage (journaled, all sequential writes) seems quite optimal for my use case.

    To help filter out answers, I have investigated some alternatives within the general domain: Voldemort (key/value, but no ordering) and CouchDB (just sharded, more batch-oriented). I am also aware of systems that are not quite distributed while otherwise qualifying: BDB variants, Tokyo Cabinet itself (not sure if Tyrant might qualify), and Redis (in-memory store only).

    Read the article

  • Is ADO.NET Entity framework database schema update possible?

    - by fyasar
    I'm working on a proof-of-concept application, something like a CRM, and I need some advice. My application's data layer is completely dynamic and runs on EF 3.5. When the user updates an entity, changes a relation or adds a new column to the database, I'm planning to model these changes first with custom classes, and then rebuild my model layer with the new changes during the application's runtime. The model layer is tied to my project in a way that makes reflecting model-layer changes easy (it is connected to the project via interfaces and loaded into the application domain at runtime).

    I need to create dynamic entities, create entity relations and modify them during runtime, and after that I need to create a change script to update the database schema. I know the ADO.NET team says "we will be able to provide this property in EF 4.0", but I don't want to wait for them. How can I apply database changes during runtime via EF 3.5? For example, if I need to create a new entity or change an entity's schema, add new properties or change property types, how can I then apply these changes to the physical database schema? Any ideas?
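    Since, as noted above, EF 3.5 does not yet offer this and EF 4.0 is not an option here, one workaround is to generate and run the schema-change DDL yourself alongside rebuilding the model layer. The sketch below shows only that workaround in its most minimal form with plain ADO.NET; the ColumnChange type and the SQL type strings are illustrative assumptions, not an EF API.

        using System;
        using System.Data.SqlClient;

        public sealed class ColumnChange
        {
            public string Table { get; set; }    // e.g. "Customer"
            public string Column { get; set; }   // e.g. "LoyaltyScore"
            public string SqlType { get; set; }  // e.g. "int NULL"
        }

        public static class SchemaUpdater
        {
            public static void AddColumn(string connectionString, ColumnChange change)
            {
                // Identifiers cannot be parameterized, so they must come from trusted metadata.
                string sql = string.Format("ALTER TABLE [{0}] ADD [{1}] {2}",
                                           change.Table, change.Column, change.SqlType);

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(sql, connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }
        }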

    Read the article

  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the logic and business layers of our product. By utilizing MEF (dependency injection) we have achieved high levels of code coverage, and I believe that we have a pretty solid product. As we have been working through some of the more complex logic, however, I have found it increasingly difficult to unit test. We are utilizing the CompositionContainer to query for types required by these complex algorithms.

    My unit tests are sometimes difficult to follow due to the lengthy mock-object setup process that must take place, just right, to allow certain circumstances to be verified. My unit tests often take me longer to write than the code that I'm trying to test. I realize this is not only an issue with dependency injection but with design as a whole. Is poor method design or a lack of composition to blame for my overly complex tests? I've tried base-classing tests, creating commonly used mock objects, and ensuring that I utilize the container as much as possible to ease this issue, but my tests always end up quite complex and hard to debug. What are some tips that you've seen to keep such tests concise, readable, and effective?
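    Independent of MEF itself, one pattern that tends to keep such tests short is a small test-data builder that owns the repetitive default wiring, so each test states only the one thing it cares about. The sketch below is a hedged illustration of that idea with made-up types (IClock, InvoiceCalculator, CalculatorBuilder), not code from the project being discussed.

        using System;

        public interface IClock { DateTime Now { get; } }

        public sealed class FixedClock : IClock
        {
            private readonly DateTime now;
            public FixedClock(DateTime now) { this.now = now; }
            public DateTime Now { get { return now; } }
        }

        // System under test (illustrative).
        public sealed class InvoiceCalculator
        {
            private readonly IClock clock;
            public InvoiceCalculator(IClock clock) { this.clock = clock; }
            public bool IsOverdue(DateTime dueDate) { return clock.Now > dueDate; }
        }

        // The builder hides the default fakes; tests override only what matters.
        public sealed class CalculatorBuilder
        {
            private IClock clock = new FixedClock(new DateTime(2010, 1, 1));

            public CalculatorBuilder WithClock(IClock clock) { this.clock = clock; return this; }
            public InvoiceCalculator Build() { return new InvoiceCalculator(clock); }
        }

    A test then reads new CalculatorBuilder().WithClock(new FixedClock(someDate)).Build() instead of repeating the full container or mock setup in every method.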

    Read the article

  • I want my logs sent to my mail with logrotate

    - by lericson
    Not strictly a programming question as such, more of a log-handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Another prerequisite is that they're highlighted with simple HTML. That is all well and good; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate hook to send the logs as an e-mail message. Example:

        /var/log/a.log /var/log/b.log {
            daily
            missingok
            copytruncate
            prerotate
                /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
            endscript
        }

    The problem with this approach is basically that logrotate sucks: it'll run the command for every log file specified in the specifier, and to my knowledge there's no way to know which of the log files is being handled (which wouldn't really help anyway). Short of repeating the exact same logrotate stanza up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. I grew tired of it today, so I ask.

    Read the article

  • What is the best way to return result from business layer to presentation layer when using linq - I

    - by samsur
    I have a business layer with DTOs that are used in the presentation layer. The application uses Entity Framework. Here is an example of a class called RoleDTO:

        public class RoleDTO
        {
            public Guid RoleId { get; set; }
            public string RoleName { get; set; }
            public string RoleDescription { get; set; }
            public int? OrganizationId { get; set; }
        }

    In the BLL I want to have a method that returns a list of DTOs, and I would like to know which is the better approach: returning IQueryable or a list of DTOs. I feel that returning IQueryable is not a good idea because the connection needs to stay open. Here are the two methods using the different approaches.

    First approach:

        public class RoleBLL
        {
            private servicedeskEntities sde;

            public RoleBLL()
            {
                sde = new servicedeskEntities();
            }

            public IQueryable<RoleDTO> GetAllRoles()
            {
                IQueryable<RoleDTO> role = from r in sde.Roles
                                           select new RoleDTO()
                                           {
                                               RoleId = r.RoleID,
                                               RoleName = r.RoleName,
                                               RoleDescription = r.RoleDescription,
                                               OrganizationId = r.OrganizationId
                                           };
                return role;
            }
        }

    Note: in the above method the data context is a private field set in the constructor, so that the connection stays open.

    Second approach:

        public static List<RoleDTO> GetAllRoles()
        {
            List<RoleDTO> roleDTO = new List<RoleDTO>();

            using (servicedeskEntities sde = new servicedeskEntities())
            {
                var roles = from pri in sde.Roles
                            select new
                            {
                                pri.RoleID,
                                pri.RoleName,
                                pri.RoleDescription
                            };

                // Add the role entities to the DTO list and return. This is necessary
                // as anonymous types cannot be returned across methods.
                foreach (var item in roles)
                {
                    RoleDTO roleItem = new RoleDTO();
                    roleItem.RoleId = item.RoleID;
                    roleItem.RoleDescription = item.RoleDescription;
                    roleItem.RoleName = item.RoleName;
                    roleDTO.Add(roleItem);
                }
                return roleDTO;
            }
        }

    Please let me know if there is a better approach. Thanks!
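    For comparison, a minimal sketch of a middle ground between the two approaches above: project straight into the DTO (as in the first method) but materialize the list with ToList() before the context is disposed (as in the second), so no open connection escapes the business layer. This assumes the RoleDTO and servicedeskEntities types from the question.

        using System.Collections.Generic;
        using System.Linq;

        public class RoleBLL
        {
            public static List<RoleDTO> GetAllRoles()
            {
                using (var sde = new servicedeskEntities())
                {
                    // The query runs and is fully materialized here, inside the using block.
                    return (from r in sde.Roles
                            select new RoleDTO
                            {
                                RoleId = r.RoleID,
                                RoleName = r.RoleName,
                                RoleDescription = r.RoleDescription,
                                OrganizationId = r.OrganizationId
                            }).ToList();
                }
            }
        }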

    Read the article

  • How do I create a selection list in ASP.NET MVC?

    - by Gary McGill
    I have a database table that records which publications a user is allowed to access. The table is very simple - it just stores user ID/publication ID pairs:

        CREATE TABLE UserPublication (UserId INTEGER, PublicationID INTEGER)

    The presence of a record for a given user and publication means that the user has access; absence of a record implies no access. I want to present my admin users with a simple screen that allows them to configure which publications a user can access. I would like to show one checkbox for each of the possible publications, and check the ones that the user can currently access. The admin user can then check or un-check any number of publications and submit the form.

    There are various publication types, and I want to group the similarly-typed publications together - so I do need control over how the publications are presented (I don't want just a flat list). My view model obviously needs a list of all the publications (since I need to display them all regardless of the current selection), and I also need a list of the publications that the user currently has access to. (I'm not sure whether I'd be better off with a single list where each item includes the publication ID and a yes/no field?) But that's as far as I've got. I've really no idea how to go about binding this to some checkboxes. Where do I start?
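    A common starting point for this kind of screen is a view model with one entry per publication, a bool for the checkbox, and a type name for grouping - effectively the "single list with a yes/no field" option mentioned above. The sketch below is only that shape with illustrative names, not code from the questioner's project.

        using System.Collections.Generic;

        public class PublicationSelectionViewModel
        {
            public int UserId { get; set; }
            public List<PublicationCheckbox> Publications { get; set; }
        }

        public class PublicationCheckbox
        {
            public int PublicationId { get; set; }
            public string Name { get; set; }
            public string PublicationType { get; set; }  // used to group similarly-typed publications in the view
            public bool HasAccess { get; set; }          // checked when a UserPublication row exists
        }

    On post-back, the controller can compare each HasAccess value against the existing UserPublication rows and insert or delete records accordingly.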

    Read the article

  • Can't cast a class with multiple inheritance

    - by Jay S.
    I am trying to refactor some code while leaving existing functionality intact. I'm having trouble casting a pointer to an object into a base interface and then getting the derived class out later. The program uses a factory object to create instances of these objects in certain cases. Here are some examples of the classes I'm working with:

        // This is the one I'm working with now that is causing all the trouble.
        // Some, but not all, methods in NewAbstract and OldAbstract overlap, so I
        // used virtual inheritance.
        class MyObject : virtual public NewAbstract, virtual public OldAbstract { ... };

        // This is what it looked like before
        class MyObject : public OldAbstract { ... };

        // This is an example of most other classes that use the base interface
        class NormalObject : public ISerializable { ... };

        // The two abstract classes. They inherit from the same object.
        class NewAbstract : public ISerializable { ... };
        class OldAbstract : public ISerializable { ... };

        // A factory object used to create instances of ISerializable objects.
        template<class T>
        class Factory
        {
        public:
            ...
            virtual ISerializable* createObject() const
            {
                return static_cast<ISerializable*>(new T()); // current factory code
            }
            ...
        };

    This question has good information on what the different types of casting do, but it's not helping me figure out this situation. Using static_cast and regular casting gives me "error C2594: 'static_cast': ambiguous conversions from 'MyObject *' to 'ISerializable *'". Using dynamic_cast causes createObject() to return NULL. The NormalObject-style classes and the old version of MyObject work with the existing static_cast in the factory. Is there a way to make this cast work? It seems like it should be possible.

    Read the article

  • Problem in filtering records using Dataview (C#3.0)

    - by Newbie
    I have a data table which is populated from an Excel sheet, and there are many Excel sheets, so I have written a utility method to do the population. In some of the Excel sheets there are date columns, and in some there are only text/string columns. My function populates the values from the Excel sheets into the DataTable properly, but there are many blank rows in the Excel sheets - some filled with NULL, some with " " - so I need to filter those records out before further processing. My plan is to use a DataView and apply the filter there:

        DataView dv = dataTable.DefaultView;
        dv.RowFilter = ColumnName + " <> ''";

    Using metadata (GetOleDbSchemaTable(OleDbSchemaGuid.Columns, restriction)) I was able to get the column names from the Excel sheet, so getting the column names is not an issue. But the problem is, as I said, that some Excel sheets have date fields and some do not, so the filter condition of the DataView needs to handle that. If I apply the above logic and it encounters a date field, it throws the error "Cannot perform '<>' operation on System.DateTime and System.String". Could you please help me out? I need to filter columns (not known at compile time, nor their data types) which can contain NULL and " ". I am using C# 3.0. Thanks.
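    A small sketch of one way to build the filter so it copes with both cases: check the column's DataType first, and only apply the empty-string test to string columns, relying on IS NULL for everything else. This is an illustration against an already-populated DataTable, not code from the question.

        using System;
        using System.Data;

        public static class ExcelFilters
        {
            public static DataView NonEmptyRows(DataTable table, string columnName)
            {
                DataColumn column = table.Columns[columnName];

                // String columns: exclude NULL, empty, and whitespace-only values.
                // Other columns (e.g. DateTime): only a NULL check makes sense.
                string filter = column.DataType == typeof(string)
                    ? string.Format("[{0}] IS NOT NULL AND TRIM([{0}]) <> ''", columnName)
                    : string.Format("[{0}] IS NOT NULL", columnName);

                return new DataView(table) { RowFilter = filter };
            }
        }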

    Read the article

  • Is there a safe / standard way to manage unstructured memory in C++?

    - by andand
    I'm building a toy VM that requires a block of memory for storing and accessing data elements of different types and of different sizes. I've done this by writing a wrapper class around a uint8_t[] data block of the needed size. That class has some template methods to write/read typed data elements to/from arbitrary locations in the memory block, both of which check to make certain the bounds aren't violated. These methods use memmove in what I hope is a more or less safe manner. That said, while I am willing to press on in this direction, I've got to believe that others with more expertise have been here before and might be willing to share their wisdom. In particular:

    1) Is there a class in one of the C++ standards (past, present, future) that has been defined to perform a function similar to what I have outlined above?

    2) If not, is there a (preferably free as in beer) library out there that does?

    3) Short of that, besides bounds checking and the inevitable issue of writing one type to a memory location and reading a different one from that location, are there other issues I should be aware of?

    Thanks. -&&

    Read the article

  • Unboxing to unknown type

    - by Robert
    I'm trying to figure out syntax that supports unboxing an integral type (short/int/long) to its intrinsic type when the type itself is unknown. Here is a completely contrived example that demonstrates the concept:

        // Just a simple container that returns values as objects
        struct DataStruct
        {
            public short ShortVale;
            public int IntValue;
            public long LongValue;

            public object GetBoxedShortValue() { return ShortVale; }
            public object GetBoxedIntValue() { return IntValue; }
            public object GetBoxedLongValue() { return LongValue; }
        }

        static void Main( string[] args )
        {
            DataStruct data;

            // Initialize data - any value will do
            data.LongValue = data.IntValue = data.ShortVale = 42;

            DataStruct newData;

            // This works if you know the type you are expecting!
            newData.ShortVale = (short)data.GetBoxedShortValue();
            newData.IntValue = (int)data.GetBoxedIntValue();
            newData.LongValue = (long)data.GetBoxedLongValue();

            // But what about when you don't know?
            newData.ShortVale = data.GetBoxedShortValue(); // error
            newData.IntValue = data.GetBoxedIntValue();    // error
            newData.LongValue = data.GetBoxedLongValue();  // error
        }

    In each case the integral types are consistent, so there should be some form of syntax that says "the object contains a simple type of X, return that as X (even though I don't know what X is)". Because the objects ultimately come from the same source, there really can't be a mismatch (short != long). I apologize for the contrived example; it seemed like the best way to demonstrate the syntax. Thanks.
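    For reference, a small sketch of the underlying constraint: unboxing requires naming the exact intrinsic type, so "unbox to whatever it happens to be" has to be expressed either by assigning into a slot of the matching type or by an explicit conversion. Convert.ChangeType is one standard way to do the latter when only a Type object is known at runtime; the example below is illustrative, not taken from the question.

        using System;

        public static class BoxedCopy
        {
            // Converts a boxed value to the requested runtime type;
            // throws if the conversion is not possible.
            public static object ConvertBoxed(object boxedValue, Type targetType)
            {
                return Convert.ChangeType(boxedValue, targetType);
            }
        }

        public static class Demo
        {
            public static void Main()
            {
                object boxedLong = 42L;

                // Works: the boxed value really is a long.
                long unboxed = (long)boxedLong;

                // Also works without writing 'long' at the call site.
                object converted = BoxedCopy.ConvertBoxed(boxedLong, boxedLong.GetType());

                Console.WriteLine("{0} {1}", unboxed, converted);
            }
        }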

    Read the article

  • What's a good Java-based Master-Slave communication mechanism?

    - by plecong
    I'm creating a Java application that requires master-slave communication between JVMs, possibly residing on the same physical machine. There will be a "master" server running inside a JEE application server (i.e. JBoss) that will have "slave" clients connect to it and dynamically register themselves for communication (that is, the master will not know the IP addresses/ports of the slaves, so they cannot be configured in advance). The master server acts as a controller that will dole work out to the slaves, and the slaves will periodically respond with notifications, so there would be bi-directional communication. I was originally thinking of RPC-based systems where each side would be a server, but that could get complicated, so I'd prefer a mechanism where there's an open socket and they talk back and forth. I'm looking for a communication mechanism that is low-latency, where the messages would be mostly primitive types, so no serious serialization is necessary. Here's what I've looked at:

    - RMI
    - JMS: built into Java; the "slave" clients would connect to the existing ConnectionFactory in the application server.
    - JAX-WS/RS: both master and slave would be servers exposing an RPC interface for bi-directional communication.
    - JGroups/Hazelcast: use shared distributed data structures to facilitate communication.
    - Memcached/MongoDB: use these as "queues" to facilitate communication, though the clients would have to poll, so there would be some latency.
    - Thrift: this does seem to keep a persistent connection, but I'm not sure how to integrate/embed a Thrift server into JBoss.
    - WebSocket/raw sockets: this would work, but requires a lot more custom code than I'd like.

    Is there any technology I'm missing?

    Edit: I also looked at:

    - JMX: have the client connect to JBoss' JMX server and receive JMX notifications for bidirectional comms.

    Read the article

  • Returning large collections from WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64k seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up to the multi-gigabyte range, which seems foolish). But when I return a collection of objects (~150 items) I exceed the default 64k.

    I'm almost at the point of returning my own class which inherits IEnumerable and has properties for HasNext, HasPrevious and PageSize, so that I can implement paging on the client side - but this seems like a lot of code. The other option is to jack up maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues.

    For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data-processing applications - .NET console apps and maybe some other non-UI apps. For the UI applications I would like to keep them responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to do processing and push it back up to the service.
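    On the paging idea mentioned above, a minimal sketch of how that contract often looks, kept as a plain data contract rather than a custom IEnumerable so it stays friendly to WCF serialization. All names (PagedResult, WidgetDto, IWidgetService) are illustrative assumptions, not part of the service being described.

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        [DataContract]
        public class PagedResult<T>
        {
            [DataMember] public IList<T> Items { get; set; }
            [DataMember] public int PageNumber { get; set; }
            [DataMember] public int PageSize { get; set; }
            [DataMember] public int TotalCount { get; set; }

            // Computed locally from the serialized counters rather than sent on the wire.
            public bool HasNext { get { return (PageNumber + 1) * PageSize < TotalCount; } }
            public bool HasPrevious { get { return PageNumber > 0; } }
        }

        [DataContract]
        public class WidgetDto
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }

        [ServiceContract]
        public interface IWidgetService
        {
            [OperationContract]
            PagedResult<WidgetDto> GetWidgets(int pageNumber, int pageSize);
        }

    UI clients can then request one page at a time and stay well under the default message size, while batch consumers simply loop until HasNext is false.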

    Read the article

  • Delphi RTTI unable to find interface

    - by conciliator
    I'm trying to fetch an interface using Delphi 2010 RTTI:

        program rtti_sb_1;

        {$APPTYPE CONSOLE}
        {$M+}

        uses
          SysUtils,
          Rtti,
          mynamespace in 'mynamespace.pas';

        var
          ctx: TRttiContext;
          RType: TRttiType;
          MyClass: TMyIntfClass;

        begin
          ctx := TRttiContext.Create;
          MyClass := TMyIntfClass.Create;

          // This prints a list of all known types, including some interfaces.
          // Unfortunately, IMyPrettyLittleInterface doesn't seem to be one of them.
          for RType in ctx.GetTypes do
            WriteLn(RType.Name);

          // Finding the class implementing the interface is easy.
          RType := ctx.FindType('mynamespace.TMyIntfClass');
          // Finding the interface itself is not.
          RType := ctx.FindType('mynamespace.IMyPrettyLittleInterface');

          MyClass.Free;
          ReadLn;
        end.

    Both IMyPrettyLittleInterface and TMyIntfClass = class(TInterfacedObject, IMyPrettyLittleInterface) are declared in mynamespace.pas. Does anyone know why this doesn't work? Is there a way to solve my problem? Thanks in advance!

    Read the article

  • Retrieving license type (linux/windows/windows+sqlserver) for an Amazon EC2 instance via the API?

    - by Geir
    I need to calculate the hourly running costs for my Amazon EC2 instances. This varies even between instances with the same hardware configuration (instance type), because I use different Amazon images (AMIs): some plain Windows Server and some Windows Server with SQL Server (both of which carry additional costs compared with plain Linux instances).

    The EC2 Java API has a describeInstances() method which returns Instance objects with metadata such as instance id, instance type (m1.small/large...), state (running, stopped...), public IP, etc. This Instance object also has a .getLicense().getPool() which, according to the Java API, should return "The license pool from which this license was used (ex: 'windows')." I thought this might also give 'windows+sqlserver' or something to that effect. The getLicense() method does, however, return null.

    I've navigated around the EC2 web console without being able to find this information, but I'm hoping that it is possible - otherwise it would mean that you cannot identify the true hourly cost of a particular instance unless you know which AMI was used to create it in the first place (plain Windows Server or Windows Server with SQL Server). Anyone? Thanks :) /Geir

    Read the article

  • Many to many table design question

    - by user169867
    Originally I had 2 tables in my DB, [Property] and [Employee]. Each employee can have 1 "Home Property", so the employee table has a HomePropertyID FK field to Property. Later I needed to model the situation where, despite having only 1 "Home Property", the employee did work at or cover for multiple properties, so I created an [Employee2Property] table that has EmployeeID and PropertyID FK fields to model this many-to-many relationship. Now I find that I need to create other many-to-many relationships between employees and properties - for example, multiple employees that are managers for a property, or multiple employees that perform maintenance work at a property, etc. My questions are:

    1) Should I create separate many-to-many tables for each of these situations, or should I just create one more table like [PropertyAssociationType] that lists the types of associations an employee can have with a property, and add a PropertyAssociationTypeID FK field to [Employee2Property] that explains what the association is? I'm curious about the pros/cons, or whether there's another, better way.

    2) Am I stupid and going about this all wrong?

    Thanks for any suggestions :)

    Read the article
