Search Results

Search found 8301 results on 333 pages for 'types'.

Page 264/333

  • Which fieldtype is best for storing PRICE values?

    - by BerggreenDK
    Hi there. I am wondering what the best "price field" in MS SQL is for a shop-like structure. Looking at this overview: http://www.teratrax.com/sql_guide/data_types/sql_server_data_types.html we have data types called money and smallmoney, then decimal/numeric, and lastly float and real. Name, memory/disk usage and value ranges:
    Money: 8 bytes (values: -922,337,203,685,477.5808 to +922,337,203,685,477.5807)
    Smallmoney: 4 bytes (values: -214,748.3648 to +214,748.3647)
    Decimal: 9 bytes [default, min. 5] (values: -10^38 +1 to 10^38 -1)
    Float: 8 bytes (values: -1.79E+308 to 1.79E+308)
    Real: 4 bytes (values: -3.40E+38 to 3.40E+38)
    My question is: is it really wise to store price values in those types? What about, for example, INT?
    Int: 4 bytes (values: -2,147,483,648 to 2,147,483,647)
    Let's say a shop uses dollars. They have cents, but I don't see prices like $49.2142342, so using a lot of decimal places to show cents seems a waste of SQL bandwidth. Secondly, most shops wouldn't show any prices near 200,000,000 (not in normal web shops at least... unless someone is trying to sell me a famous tower in Paris). So why not go for an int? An int is fast, it's only 4 bytes, and you can easily fake decimals by saving values in cents instead of dollars and then dividing when you present the values. The other approach would be to use smallmoney, which is 4 bytes too, but this requires the math part of the CPU to do the calculation, whereas int is plain integer arithmetic; on the downside you will need to divide every single result. Are there any currency-related problems with regional settings when using smallmoney/money fields? What do these map to in C#/.NET? Any pros/cons? Go for integer prices, smallmoney, or something else? What does your experience tell you?
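
    A minimal C# sketch of the two options being weighed: whole cents in an int versus a decimal amount (SQL Server's money and smallmoney columns surface as System.Decimal through ADO.NET). The Price class and its members are made up purely for illustration.
        // Option A: whole cents in an int (exact arithmetic, 4 bytes, divide only for display).
        // Option B: a decimal amount (money/smallmoney arrive in .NET as decimal).
        public class Price
        {
            public int Cents { get; set; }
            public decimal CentsAsDollars() { return Cents / 100m; }

            public decimal Amount { get; set; }
        }
    Either way, doing the arithmetic in whole cents or in decimal avoids the rounding surprises that float and real would introduce for prices.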

    Read the article

  • LINQ Joins - Performance

    - by Meiscooldude
    I am curious how exactly LINQ (not LINQ to SQL) performs its joins behind the scenes, in relation to how SQL Server performs joins. Before executing a query, SQL Server generates an execution plan, essentially an expression tree describing what it believes is the best way to execute the query. Each node provides information on whether to do a Sort, Scan, Select, Join, etc. On a 'Join' node in the execution plan, we can see three possible algorithms: Hash Join, Merge Join, and Nested Loops Join. SQL Server chooses which algorithm to use for each join operation based on the expected number of rows in the inner and outer tables, the type of join (some algorithms don't support all types of joins), whether the output needs to be ordered, and probably many other factors. Join algorithms:
    Nested Loops Join: best for small inputs, can be optimized with an ordered inner table.
    Merge Join: best for medium to large sorted inputs, or an output that needs to be ordered.
    Hash Join: best for medium to large inputs, can be parallelized to scale linearly.
    LINQ query: DataTable firstTable, secondTable; ... var rows = from firstRow in firstTable.AsEnumerable() join secondRow in secondTable.AsEnumerable() on firstRow.Field<object>(randomObject.Property) equals secondRow.Field<object>(randomObject.Property) select new { firstRow, secondRow };
    SQL query: SELECT * FROM firstTable fT INNER JOIN secondTable sT ON fT.Property = sT.Property
    SQL Server might use a Nested Loops Join if it knows there are a small number of rows in each table, a Merge Join if it knows one of the tables has an index, and a Hash Join if it knows there are a lot of rows in either table and neither has an index. Does LINQ choose an algorithm for its joins, or does it always use the same one?
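
    For comparison, LINQ to Objects does not build an execution plan or pick between strategies per query; Enumerable.Join is generally described as a hash-style join. Below is a rough C# sketch of that technique, an illustration of the idea rather than the actual framework source.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class JoinSketch
        {
            // Hash-style join: build a lookup over the inner keys once, then stream the
            // outer sequence and probe the lookup (a single pass over each input).
            public static IEnumerable<TResult> HashJoin<TOuter, TInner, TKey, TResult>(
                IEnumerable<TOuter> outer, IEnumerable<TInner> inner,
                Func<TOuter, TKey> outerKey, Func<TInner, TKey> innerKey,
                Func<TOuter, TInner, TResult> result)
            {
                ILookup<TKey, TInner> lookup = inner.ToLookup(innerKey);
                foreach (TOuter o in outer)
                    foreach (TInner i in lookup[outerKey(o)])
                        yield return result(o, i);
            }
        }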

    Read the article

  • Generically validating input data via regex. Input error when match count == 0

    - by Valamas
    Hi, I have a number of types of data fields on an input form, for example a web page. Some fields must be an email address, must be a number, must be a number between certain values, or must contain certain characters. Basically, the list is open-ended. I want to come up with a generic way of validating the data that is input. I thought I would use regex to validate it: each field that needs validation would be associated with a "regex expression" and a "regex error message" stating what the field should contain. My current mock-up treats a match count of zero as an error and displays the message. While I am still a white-belt regex designer, I have come to understand that in certain situations it is difficult to write a regex that yields a match count of zero for every invalid case. A complex regex case I looked for help on was Link Here. That forum post was a disaster because I confused the people helping me, but one of the replies said it is difficult to write a regex where a match count of zero reliably means the input data is invalid. Does anyone have comments or suggestions on this generic validation system I am trying to create? Thanks
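
    A minimal C# sketch of the rule-per-field idea described above: a pattern plus an error message, where "no match" means invalid. The class and property names are invented for the example.
        using System.Text.RegularExpressions;

        // One validation rule per field. Anchoring patterns (^...$) keeps the logic simple:
        // the whole value either matches the rule or the field is invalid.
        public class FieldRule
        {
            public string Pattern { get; set; }        // e.g. @"^\d+$" for "must be a number"
            public string ErrorMessage { get; set; }   // shown when the input does not match

            public bool IsValid(string input)
            {
                return input != null && Regex.IsMatch(input, Pattern);
            }
        }
    Writing each pattern as a full-string (anchored) match sidesteps the "hard to force zero matches" problem: the pattern describes what valid input looks like, and anything else fails.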

    Read the article

  • Scala: Correcting type inference of representation type over if statement

    - by drhagen
    This is a follow-up to two questions on representation types, which are type parameters of a trait designed to represent the type underlying a bounded type member (or something like that). I've had success creating instances of classes, e.g ConcreteGarage, that have instances cars of bounded type members CarType. trait Garage { type CarType <: Car[CarType] def cars: Seq[CarType] def copy(cars: Seq[CarType]): Garage def refuel(car: CarType, fuel: CarType#FuelType): Garage = copy( cars.map { case `car` => car.refuel(fuel) case other => other }) } class ConcreteGarage[C <: Car[C]](val cars: Seq[C]) extends Garage { type CarType = C def copy(cars: Seq[C]) = new ConcreteGarage(cars) } trait Car[C <: Car[C]] { type FuelType <: Fuel def fuel: FuelType def copy(fuel: C#FuelType): C def refuel(fuel: C#FuelType): C = copy(fuel) } class Ferrari(val fuel: Benzin) extends Car[Ferrari] { type FuelType = Benzin def copy(fuel: Benzin) = new Ferrari(fuel) } class Mustang(val fuel: Benzin) extends Car[Mustang] { type FuelType = Benzin def copy(fuel: Benzin) = new Mustang(fuel) } trait Fuel case class Benzin() extends Fuel I can easily create instances of Cars like Ferraris and Mustangs and put them into a ConcreteGarage, as long as it's simple: val newFerrari = new Ferrari(Benzin()) val newMustang = new Mustang(Benzin()) val ferrariGarage = new ConcreteGarage(Seq(newFerrari)) val mustangGarage = new ConcreteGarage(Seq(newMustang)) However, if I merely return one or the other, based on a flag, and try to put the result into a garage, it fails: val likesFord = true val new_car = if (likesFord) newFerrari else newMustang val switchedGarage = new ConcreteGarage(Seq(new_car)) // Fails here The switch alone works fine, it is the call to ConcreteGarage constructor that fails with the rather mystical error: error: inferred type arguments [this.Car[_ >: this.Ferrari with this.Mustang <: this.Car[_ >: this.Ferrari with this.Mustang <: ScalaObject]{def fuel: this.Benzin; type FuelType<: this.Benzin}]{def fuel: this.Benzin; type FuelType<: this.Benzin}] do not conform to class ConcreteGarage's type parameter bounds [C <: this.Car[C]] val switchedGarage = new ConcreteGarage(Seq(new_car)) // Fails here ^ I have tried putting those magic [C <: Car[C]] representation type parameters everywhere, but without success in finding the magic spot.

    Read the article

  • Query String to Object with strongly typed properties

    - by Kamar
    Let's say we track 20 query string parameters on our site. Each request will contain only a subset of those 20 parameters, but we look for all or most of them on each request. We do not want to loop through the collection every time we look for a particular parameter, whether early on or somewhere down the pipeline in the code. So we loop once through the query string collection, convert the string values to their respective types (enums, int, string etc.), and populate a QueryString object which is added to the context. After that, wherever it's needed, we have strongly typed properties on the QueryString object, which is easy to use and keeps things consistent.
    public class QueryString
    {
        public int Key1 { get; private set; }
        public SomeType Key2 { get; private set; }
        private QueryString() { }
        public static QueryString GetQueryString()
        {
            QueryString l_QS = new QueryString();
            foreach (string l_Key in HttpContext.Current.Request.QueryString.AllKeys)
            {
                switch (l_Key)
                {
                    case "key1":
                        l_QS.Key1 = DoSomething(l_Key, HttpContext.Current.Request.QueryString[l_Key]);
                        break;
                    case "key2":
                        l_QS.Key2 = DoAnotherThing(l_Key, HttpContext.Current.Request.QueryString[l_Key]);
                        break;
                }
            }
            return l_QS;
        }
    }
    Any other solution to achieve this?
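
    One alternative, sketched below in C#, is to replace the switch with a dictionary of per-key parsers, so adding a tracked parameter means adding one entry rather than a new case. The key names and member types here are placeholders, not part of the original code.
        using System;
        using System.Collections.Generic;

        public class ParsedQueryString
        {
            public int Key1 { get; private set; }
            public DateTime Key2 { get; private set; }

            // Map each query-string key to a delegate that parses and stores its value.
            public void Load(IEnumerable<KeyValuePair<string, string>> pairs)
            {
                var parsers = new Dictionary<string, Action<string>>(StringComparer.OrdinalIgnoreCase)
                {
                    { "key1", v => Key1 = int.Parse(v) },
                    { "key2", v => Key2 = DateTime.Parse(v) }
                };

                foreach (var pair in pairs)
                {
                    Action<string> parse;
                    if (parsers.TryGetValue(pair.Key, out parse))
                        parse(pair.Value);   // unknown keys are simply ignored
                }
            }
        }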

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete control like the one you find in Visual Studio. It's all fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement for the filtering: var filterCollection = from s in listCollection where s.FilterValue.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0 orderby s.FilterValue select s; I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is virtualised as well, so there will only be at most 7-8 visual elements in memory and in the visual tree. The caveat right now is that when the user types extremely fast in the RichTextBox, and I execute the filtering + binding on every key up, there's a semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke is doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve seamless filtering much like the one in Visual Studio. I'm not sure where exactly my problem lies, so I'm also attributing it to IndexOf's string operation; or perhaps my list of strings could be organised into some kind of index that would speed up searching. Any suggestions or code samples are much welcome. Thanks.
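
    A hedged C# sketch of one way to keep keystrokes from racing each other: cancel the previous filter pass before starting a new one, so only the latest keystroke's results ever reach the ListBox. It assumes .NET 4's Task and CancellationTokenSource are available, and simplifies the item type to string.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;
        using System.Threading.Tasks;

        public class SuggestionFilter
        {
            private CancellationTokenSource _cts;

            // Call on every key up: cancels the pass started by the previous keystroke.
            public Task<List<string>> FilterAsync(IEnumerable<string> items, string currentWord)
            {
                if (_cts != null) _cts.Cancel();
                _cts = new CancellationTokenSource();
                CancellationToken token = _cts.Token;

                return Task.Factory.StartNew(() =>
                    items.Where(s =>
                    {
                        token.ThrowIfCancellationRequested();   // abandon stale work quickly
                        return s.IndexOf(currentWord, StringComparison.OrdinalIgnoreCase) >= 0;
                    })
                    .OrderBy(s => s)
                    .ToList(), token);
            }
        }
    The caller would marshal the completed result back to the UI thread before binding, and ignore the OperationCanceledException thrown by cancelled passes.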

    Read the article

  • changing the serialization procedure for a graph of objects (.net framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET Framework. The application depends heavily upon a large data structure (a tree-like structure) that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:
    <Serializable()> Public Class BigObjet
        Inherits List(Of smallObject)
    End Class
    <Serializable()> Public Class smallObject
        Inherits List(Of otherSmallerObjects)
    End Class
    ...
    The BinaryFormatter object does a nice job but it's not optimized at all, and the entire data structure reaches around 100 MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core). I found a nice .dll on CodeProject (see "optimizing serialization...") so I wrote a modified version of the classes above, overriding the default serialization/deserialization procedure, and reached very good results. The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice! Use a "helper" graph of objects that takes care of the entire serialization/deserialization procedure, reading data from the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized, so........ :( Or modify the "old" graph to include a modified version of the serialization procedure, so I'll be able to deserialize old files and save them in the new format... this doesn't sound too good, IMHO. Well, any help will be highly appreciated :)
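
    A rough C# illustration of one way to live with both formats: write a small marker byte at the start of files saved in the new format, so the loader can tell which deserializer to use; legacy files fall through to BinaryFormatter and can be re-saved in the new format afterwards. All names and the marker value are assumptions, and the optimized reader is left as a placeholder.
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        public static class GraphStore
        {
            // Chosen to differ from the first byte of a BinaryFormatter stream.
            private const byte NewFormatMarker = 0xF1;

            public static object Load(string path)
            {
                using (FileStream stream = File.OpenRead(path))
                {
                    int first = stream.ReadByte();
                    if (first == NewFormatMarker)
                        return LoadNewFormat(stream);        // marker already consumed

                    stream.Position = 0;                     // old file: rewind, use the legacy path
                    return new BinaryFormatter().Deserialize(stream);
                }
            }

            // Placeholder for the optimized reader from the CodeProject library.
            private static object LoadNewFormat(Stream stream) { return null; }
        }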

    Read the article

  • JQuery Tabbed Nav Menu and PHP Forms Question?

    - by SlAcKeR
    I'm using a jQuery tabbed menu which holds different types of forms. When I select a form under one of the tabs and submit it, the page jumps back to the default tab instead of the tab the form is located in. How would I go about fixing this so that the current tab is still selected after the form is submitted? Is it a jQuery problem or a PHP problem? Here is the jQuery:
    $(document).ready(function() {
        // When page loads...
        $(".form-content").hide(); // Hide all content
        var firstMenu = $("#home-menu ul li:first");
        firstMenu.show();
        firstMenu.find("a").addClass("selected-link"); // Activate first tab
        $(".form-content:first").show(); // Show first tab content
        // On Click Event
        $("#home-menu ul li").click(function() {
            $("#home-menu ul li a").removeClass("selected-link"); // Remove any "selected-link" class
            $(this).find("a").addClass("selected-link"); // Add "selected-link" class to selected tab
            $(".form-content").hide(); // Hide all tab content
            var activeTab = $(this).find("a").attr("href"); // Find the href attribute value to identify the selected tab + content
            $(activeTab).fadeIn(); // Fade in the selected ID content
            return false;
        });
    });

    Read the article

  • How to use boost::fusion::transform on heterogeneous containers?

    - by Kyle
    Boost.org's example given for fusion::transform is as follows: struct triple { typedef int result_type; int operator()(int t) const { return t * 3; }; }; // ... assert(transform(make_vector(1,2,3), triple()) == make_vector(3,6,9)); Yet I'm not "getting it." The vector in their example contains elements all of the same type, but a major point of using fusion is containers of heterogeneous types. What if they had used make_vector(1, 'a', "howdy") instead? int operator()(int t) would need to become template<typename T> T& operator()(T& const t) But how would I write the result_type? template<typename T> typedef T& result_type certainly isn't valid syntax, and it wouldn't make sense even if it was, because it's not tied to the function.

    Read the article

  • Are UTF-16 characters (as used by, for example, wide winapi functions) always 2 bytes long?

    - by Cray
    Please clarify for me how UTF-16 works. I am a little confused, considering these points: There is a static type in C++, WCHAR, which is 2 bytes long (always 2 bytes long, obviously). Most of MSDN and some other documentation seem to assume that characters are always 2 bytes long. This may just be my imagination, since I can't come up with any particular examples, but it just seems that way. There are no "extra wide" functions or character types widely used in C++ or Windows, so I would assume that UTF-16 is all that is ever needed. To my uncertain knowledge, Unicode has a lot more characters than 65,535, so they obviously don't all fit in 2 bytes. UTF-16 seems to be a bigger version of UTF-8, and UTF-8 characters can be of different lengths. So if a UTF-16 character is not always 2 bytes long, how long else could it be? 3 bytes? Or only multiples of 2? And then, for example, if there is a winapi function that wants to know the size of a wide string in characters, and the string contains 2 characters which are each 4 bytes long, how is the size of that string in characters calculated? Is it 2 chars long or 4 chars long? (Since it is 8 bytes long, and each WCHAR is 2 bytes.)
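
    A small C# illustration of the variable-width point (the .NET string type is UTF-16, just like WCHAR strings): a character outside the Basic Multilingual Plane takes two 16-bit code units, and "length in characters" APIs count code units, not user-visible characters.
        using System;
        using System.Globalization;

        class Utf16Demo
        {
            static void Main()
            {
                string s = "A\U0001D11E";   // 'A' plus MUSICAL SYMBOL G CLEF (U+1D11E)

                Console.WriteLine(s.Length);                                // 3 UTF-16 code units
                Console.WriteLine(char.IsSurrogatePair(s, 1));              // True: a surrogate pair
                Console.WriteLine(new StringInfo(s).LengthInTextElements);  // 2 user-visible characters
            }
        }
    So a UTF-16 code point is either 2 or 4 bytes, and a wide string holding two 4-byte characters reports its length as 4 WCHARs.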

    Read the article

  • How would you start automating my job?

    - by Jurily
    At my new job, we sell imported stuff. In order to be able to sell said stuff, currently the following things need to happen for every incoming shipment:
    1. Invoice arrives, in the form of an email attachment, an Excel spreadsheet.
    2. Monkey opens the invoice and copy-pastes the relevant part of three columns into the relevant parts of a spreadsheet template, where extremely complex calculations happen, like =B2*550.
    3. Monkey sends this new spreadsheet to the boss (email if lucky, printer otherwise), who sets the retail price.
    4. Monkey opens the reply, then proceeds to input the data into the production database using a client program that is unusable on so many levels it's not even worth detailing.
    5. Monkey fires up HyperTerminal, types in "AT", disconnects.
    6. Monkey sends text messages and emails to customers using another part of the horrible client program, one at a time.
    I want to change Monkey from myself to software wherever possible. I've never written anything that interfaces with email, Excel, databases or SMS before, but I'd be more than happy to learn if it saves me from this. Here's my uneducated wish list:
    1. Monkey asks Thunderbird (mail server perhaps?) for the attachment.
    2. Monkey tells Excel to dump the spreadsheet into a more Jurily-friendly format, like CSV or something.
    3. Monkey parses the output and does the complex calculations. // TODO: find a way to get the boss-generated prices with minimal manual labor involved
    4. Monkey connects to the database and inserts the data.
    5. Monkey spams customers.
    Is all this feasible? If yes, where do I start reading? How would you improve it? What language/framework do you think would be ideal for this? What would you do about the boss?
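
    As a hedged starting point for steps 2-3 of the wish list: once the spreadsheet is exported to CSV, reading the relevant columns and applying the markup is a few lines of C#. The column layout, file name and the 550 factor here are assumptions taken from the example above.
        using System;
        using System.IO;
        using System.Linq;

        class InvoiceImport
        {
            static void Main()
            {
                // Assumed CSV layout: article code, unit cost, description (skip the header row).
                foreach (string line in File.ReadAllLines("invoice.csv").Skip(1))
                {
                    string[] cols = line.Split(',');
                    decimal unitCost = decimal.Parse(cols[1]);
                    decimal proposedPrice = unitCost * 550m;   // the "=B2*550" step

                    Console.WriteLine("{0}: cost {1}, proposed {2}", cols[0], unitCost, proposedPrice);
                }
            }
        }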

    Read the article

  • Rails and SWFUpload: Sporadic Failure on Upload and Response Issue

    - by Gimli
    I've got SWFUpload version 2.5 beta 3 attached to my Rails 2.3.2 application and it works great 75% of the time. The other 25%, I get two types of failure. The first is a failure to upload: the activity starts, but the file is never actually sent to the server. In my JS, when setting breakpoints, it stops between setting the post params and starting the upload (last two lines below): var params = { "authenticity_token": authToken, "photo[name]": $("#photo_name").val(), "photo[description]": $("#photo_description").val(), "photo[post_id]": $("#post_id").val() }; params[keyName] = key; swfu.setPostParams(params); swfu.startUpload(); It just occurred to me that the file might not be getting queued, but I've got a handler attached that shows the file name in a text box, and that works consistently. The second issue is this: sometimes the response string is truncated. I'm rendering a partial in Rails without a layout to show the uploaded file data in my layout. Most of the time this partial comes through fine, but sometimes only the first line comes through, or only parts of the first several lines. The variables seem to be coming through into the view just fine. Any ideas? Thanks!

    Read the article

  • How can I lookup data about a book from its barcode number?

    - by Joel Spolsky
    I'm building the world's simplest library application. All I want to be able to do is scan in a book's UPC (barcode) using a typical scanner (which just types the numbers of the barcode into a field) and then use it to look up data about the book... at a minimum, title, author, year published, and either the Dewey Decimal or Library of Congress catalog number. The goal is to print out a tiny sticker ("spine label") with the card catalog number that I can stick on the spine of the book, and then I can sort the books by card catalog number on the shelves in our company library. That way books on similar subjects will tend to be near each other, for example, if you know you're looking for a book about accounting, all you have to do is find SOME book about accounting and you'll see the other half dozen that we have right next to it which makes it convenient to browse the library. There seem to be lots of web APIs to do this, including Amazon and the Library of Congress. But those are all extremely confusing to me. What I really just want is a single higher level function that takes a UPC barcode number and returns some basic data about the book.
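
    A hedged C# sketch of that "single higher level function": take the scanned ISBN, call one metadata service, and hand back a small record to fill the spine label from. The Open Library endpoint shown is one public option; treat the URL, the JSON shape and the BookInfo type as assumptions to verify, not a finished integration.
        using System.Net;

        public class BookInfo
        {
            public string Isbn;
            public string RawJson;   // parse title, author, year and classification out of this
        }

        public static class BookLookup
        {
            // Amazon or Library of Congress services could sit behind the same signature.
            public static BookInfo FromBarcode(string isbn)
            {
                string url = "https://openlibrary.org/api/books?bibkeys=ISBN:" + isbn +
                             "&format=json&jscmd=data";
                using (var client = new WebClient())
                {
                    return new BookInfo { Isbn = isbn, RawJson = client.DownloadString(url) };
                }
            }
        }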

    Read the article

  • Database schema for Product Properties

    - by Chemosh
    Like so many people, I'm looking for a Products / Product Properties database schema. I'm using Ruby on Rails and (Thinking) Sphinx for faceted searches. Requirements: adding new product types and their options should not require a change to the database schema, and the design must support faceted searches using Sphinx. Solutions I've come across (see Bill Karwin's answer):
    Option 1: Single Table Inheritance. Not really an option; the table would contain way too many columns.
    Option 2: Class Table Inheritance. Ruby on Rails caches the database schema on start-up, which means a restart whenever a new type of product is introduced. If you have a sizeable product catalog this could mean hundreds of tables.
    Option 3: Serialized LOB. Kills being able to do faceted searches without heavy application logic.
    Option 4: Entity-Attribute-Value. For testing purposes, EAV worked fine. However, it could quickly become a mess and a maintenance hell as you add more and more options (e.g. when an option increases the price or delivery time).
    Which option should I go with? What other solutions are out there? Is there a silver bullet (ha) I overlooked?

    Read the article

  • What is the rationale to not allow overloading of C++ conversions operator with non-member function

    - by Vicente Botet Escriba
    C++0x has added explicit conversion operators, but they must always be defined as members of the Source class. The same applies to the assignment operator, it must be defined on the Target class. When the Source and Target classes of the needed conversion are independent of each other, neither the Source can define a conversion operator, neither the Target can define a constructor from a Source. Usually we get it by defining a specific function such as Target ConvertToTarget(Source& v); If C++0x allowed to overload conversion operator by non member functions we could for example define the conversion implicitly or explicitly between unrelated types. template < typename To, typename From > operator To(const From& val); For example we could specialize the conversion from chrono::time_point to posix_time::ptime as follows template < class Clock, class Duration> operator boost::posix_time::ptime( const boost::chrono::time_point<Clock, Duration>& from) { using namespace boost; typedef chrono::time_point<Clock, Duration> time_point_t; typedef chrono::nanoseconds duration_t; typedef duration_t::rep rep_t; rep_t d = chrono::duration_cast<duration_t>( from.time_since_epoch()).count(); rep_t sec = d/1000000000; rep_t nsec = d%1000000000; return posix_time::from_time_t(0)+ posix_time::seconds(static_cast<long>(sec))+ posix_time::nanoseconds(nsec); } And use the conversion as any other conversion. For a more complete description of the problem, see here or on my Boost.Conversion library.. So the question is: What is the rationale to non allow overloading of C++ conversions operator with non-member functions?

    Read the article

  • Setting Nullable Integer to String Containing Nothing yields 0

    - by Brian MacKay
    I've been pulling my hair out over some unexpected behavior with nullable integers. If I set an Integer? to Nothing, it becomes Nothing as expected. If I set an Integer? to a String that is Nothing, it becomes 0! Of course I get this whether or not I explicitly cast the String to Integer?. I realize I could work around this pretty easily, but I want to know what I'm missing.
    Dim NullString As String = Nothing
    Dim NullableInt As Integer? = CType(NullString, Integer?) 'Expected NullableInt to be Nothing, but it's 0!
    NullableInt = Nothing 'This works -- NullableInt now contains Nothing.
    How is this possible? EDIT: Previously I had my code up here without the explicit conversion to Integer?, and everyone seemed to be fixated on that. I want to be clear that this is not an issue that would have been caught by Option Strict On -- check out the accepted answer. This is a quirk of the string-to-integer conversion rules which predate nullable types, but still impact them.

    Read the article

  • Google Maps: Simple app not working on IE

    - by Peter Bridger
    We have a simple Google Maps traffic application up at: http://www.avonandsomerset.police.uk/newsroom/traffic/ For some reason it's recently stopped working in IE correctly. At this point in time it was using V2 of the API, so I've just upgraded it to use V3 - but it still won't work in IE. It works fine in Chrome & Firefox. But in all versions of IE I've tired (6,7,8) the Google Map doesn't load fully. The problem The Google Map DIV will generally load all the controls (Zoom, Powered by Google, map types) but the actual map tiles do not appear in IE. I can just see the grey background of the DIV What I've tried I've commented down the JavaScript code to just the following on the page, but it still has the same problem: <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script> <script type="text/javascript" > var map; $(document).ready(function () { initialize(); // Set-up Google map }); function initialize() { var options = { zoom: 9, center: new google.maps.LatLng(51.335759, -2.870178), mapTypeId: google.maps.MapTypeId.ROADMAP }; map = new google.maps.Map(document.getElementById("googleMap"), options); } </script>

    Read the article

  • check properties of two objects for changes

    - by k-hoffmann
    Hi, I have to develop a mechanism to check two objects' properties for changes. All properties which need to be checked are marked with an attribute. At the moment I read all properties from the actual object via LINQ, read the corresponding property from the old object, and fill an object of my own with the two properties (old and new value). In code, the call to the worker class looks like this: public void CreateHistoryMap(BaseEntity actual, BaseEntity old) { CreateHistoryMap(actualEntity, oldEntity) .ForEach(mapEntry => CreateHistoryEntry(mapEntry), mapEntry => IfChangesDetected(mapEntry)); } CreateHistoryMap builds up the HistoryMapEntry which contains the two properties. CreateHistoryEntry builds up the object which is saved to the database; IfChangesDetected checks the object for changes. I have to handle our own application-specific types when generating history values for the database (like concatenating list values and so on). My problem is that I have to read the values of the properties twice: once for change detection, and once for the concrete CreateHistoryEntry. How can I eliminate this problem, or how can I implement this change-tracking scenario with the nice C# 3.5 features? Thanks a lot.
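
    A minimal C# sketch of reading each attributed property exactly once per comparison and keeping both values together, so the change check and the history entry reuse the same pair. The attribute and class names are invented for the example.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Reflection;

        [AttributeUsage(AttributeTargets.Property)]
        public class TrackedAttribute : Attribute { }

        public class PropertyChange
        {
            public string Name;
            public object OldValue;
            public object NewValue;
            public bool HasChanged { get { return !object.Equals(OldValue, NewValue); } }
        }

        public static class ChangeDetector
        {
            // One reflection pass: each [Tracked] property is read once from each object.
            public static List<PropertyChange> Compare(object actual, object old)
            {
                return actual.GetType()
                    .GetProperties(BindingFlags.Public | BindingFlags.Instance)
                    .Where(p => p.IsDefined(typeof(TrackedAttribute), true))
                    .Select(p => new PropertyChange
                    {
                        Name = p.Name,
                        NewValue = p.GetValue(actual, null),
                        OldValue = p.GetValue(old, null)
                    })
                    .ToList();
            }
        }
    CreateHistoryEntry can then work from the PropertyChange instances where HasChanged is true, without touching the source properties again.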

    Read the article

  • Will my LinqToSql execution be deferred if I filter with IEnumerable<T> instead of IQueryable<T>?

    - by cottsak
    I have been using these common EntityObjectFilters as a "pipes and filters" way to query a particular item with an ID from a collection: public static class EntityObjectFilters { public static T WithID<T>(this IQueryable<T> qry, int ID) where T : IEntityObject { return qry.SingleOrDefault<T>(item => item.ID == ID); } public static T WithID<T>(this IList<T> list, int ID) where T : IEntityObject { return list.SingleOrDefault<T>(item => item.ID == ID); } } ..but I wondered to myself: "Can I make this simpler by just creating one extension for all IEnumerable<T> types?" So I came up with this: public static class EntityObjectFilters { public static T WithID<T>(this IEnumerable<T> qry, int ID) where T : IEntityObject { return qry.SingleOrDefault<T>(item => item.ID == ID); } } Now, while this appears to yield the same result, I want to know: when applied to IQueryable<T>s, will the expression tree be passed to LINQ to SQL for evaluation as SQL code, or will my query be evaluated in its entirety first and then iterated with Funcs? I suspect (as per Richard's answer) that the latter is true, which is obviously what I don't want. I want the same result, but with the added benefit of the delayed SQL execution for IQueryable<T>s. Can someone confirm what will actually happen and provide a simple explanation as to how it works?
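
    For what it's worth, the difference comes down to what the compiler hands the extension method: an IQueryable<T> parameter receives the predicate as an expression tree the query provider can translate, while an IEnumerable<T> parameter receives a compiled delegate, so the source is enumerated and filtered in memory. A hedged C# sketch of keeping both overloads side by side (IEntityObject is assumed to look roughly like this):
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public interface IEntityObject { int ID { get; } }

        public static class EntityObjectFilters
        {
            // Expression-tree path: a LINQ to SQL provider gets a chance to translate the filter.
            public static T WithID<T>(this IQueryable<T> qry, int id) where T : IEntityObject
            {
                return qry.SingleOrDefault(item => item.ID == id);
            }

            // Delegate path: fine for collections already in memory, but an IQueryable source
            // passed here is pulled and filtered client-side.
            public static T WithID<T>(this IEnumerable<T> seq, int id) where T : IEntityObject
            {
                return seq.SingleOrDefault(item => item.ID == id);
            }
        }
    Overload resolution prefers the IQueryable<T> version when the compile-time type is IQueryable<T>, so keeping both preserves the chance of server-side filtering without changing call sites.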

    Read the article

  • WCF data services (OData), query with inheritance limitation?

    - by Mathieu Hétu
    Project: WCF Data service using internally EF4 CTP5 Code-First approach. I configured entities with inheritance (TPH). See previous question on this topic: Previous question about multiple entities- same table The mapping works well, and unit test over EF4 confirms that queries runs smoothly. My entities looks like this: ContactBase (abstract) Customer (inherits from ContactBase), this entity has also several Navigation properties toward other entities Resource (inherits from ContactBase) I have configured a discriminator, so both Customer and Resource map to the same table. Again, everythings works fine on the Ef4 point of view (unit tests all greens!) However, when exposing this DBContext over WCF Data services, I get: - CustomerBases sets exposed (Customers and Resources sets seems hidden, is it by design?) - When I query over Odata on Customers, I get this error: Navigation Properties are not supported on derived entity types. Entity Set 'ContactBases' has a instance of type 'CodeFirstNamespace.Customer', which is an derived entity type and has navigation properties. Please remove all the navigation properties from type 'CodeFirstNamespace.Customer'. Stacktrace: at System.Data.Services.Serializers.SyndicationSerializer.WriteObjectProperties(IExpandedResult expanded, Object customObject, ResourceType resourceType, Uri absoluteUri, String relativeUri, SyndicationItem item, DictionaryContent content, EpmSourcePathSegment currentSourceRoot) at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, ResourceType expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target) at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__b.MoveNext() at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer) at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer) at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved) at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved) at System.Data.Services.ResponseBodyWriter.Write(Stream stream) Seems like a limitation of WCF Data services... is it? Not much documentation can be found on the web about WCF Data services (OData) and inheritance specifications. How can I overpass this exception? I need these navigation properties on derived entities, and inheritance seems the only way to provide mapping of 2 entites on the same table with Ef4 CTP5... Any thoughts?

    Read the article

  • What do I name classes whose only purpose is to act as a structure?

    - by Sergio Tapia
    For example, take my Actor class: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Drawing; namespace FreeIMDB { class Actor { public string Name { get; set; } public Image Portrait { get; set; } public DateTime DateOfBirth { get; set; } public List<string> ActingRoles { get; set; } public List<string> WritingRoles { get; set; } public List<string> ProducingRoles { get; set; } public List<string> DirectingRoles { get; set; } } } This class will only be used to stuff information into it, and allow other developers to get their values. What are these types of classes officially called? What is the correct nomenclature?

    Read the article

  • Sync two SqlExpress using NHibernate

    - by Christian
    Hello, I am creating a simple project management system which uses NHibernate for object storage. The underlying database is SQL Express (at least currently, for development). The client runs on either the desktop or the laptop. I know I could use web services and store the DB only on the desktop, but this would force the desktop to be available all the time. I am currently thinking about duplicating the DB, having two instances with "different data". To clarify, we are not talking about a production app here, it's a prototype. One way to achieve this very simply would be the following process:
    Client: Check if the desktop DB is available (through a web service).
    Client: If yes, use desktop storage, no problem here.
    Client: If not, use its own DB as storage.
    Client: Poll the desktop regularly; as soon as it comes on, sync.
    Client: Switch to desktop storage.
    ...
    Desktop: Do not attempt any DB operation before checking for a required sync.
    Desktop: If a sync is needed, do it...
    My question is now: how would you sync? Assume 4 or 5 types of objects, all with GUIDs as identifiers. Would you always manually "lazy load" all objects of a certain type and feed them to the DB? Would you always drop the whole desktop DB in case the client DB may be newer and out of sync? Again, I want to stress that I am not assuming any conflicts or stale data; I basically just want to "copy the whole DB from the client". Would you use NHibernate for this? Or would you separate the copy process? When I think about it, my question comes down to this: is there any function in NHibernate like SyncDBs_SourceWins_(SourceDB, TargetDB)? Thanks for help, Chris

    Read the article

  • passing an "unknown enumeration" to a method

    - by firoso
    I'm currently trying to make a class that can register strings as identifiers and associate them with different types of enumerations. These enumerations are evaluated only insofar as I want to ensure that the parameter passed to Broadcast (messageType) is an instance of the associated enum type. It would work something like this: Diagnostics.RegisterIdentifier("logger", typeof(TestEnum)); Diagnostics.Broadcast("logger", TestEnum.Info, null, "Hello World", null); Here's the code I currently have; I need to be able to verify that messageType is contained in messageTypesFromIdentifier.
    private static Dictionary<string, Type> identifierMessageTypeMapping = new Dictionary<string, Type>();
    private static List<IListener> listeners = new List<IListener>();

    public static void RegisterIdentifier(string identifier, Type messageTypesEnum)
    {
        if (messageTypesEnum.BaseType.FullName == "System.Enum")
        {
            identifierMessageTypeMapping.Add(identifier, messageTypesEnum);
        }
        else
        {
            throw new ArgumentException("Expected type of messageTypesEnum to derive from System.Enum", "messageTypesEnum");
        }
    }

    public static void Broadcast(string identifier, object messageType, string metaIdentifier, string message, Exception exception)
    {
        if (identifierMessageTypeMapping.ContainsKey(identifier))
        {
            Type messageTypesFromIdentifier = identifierMessageTypeMapping[identifier];
            foreach (var listener in listeners)
            {
                DiagnosticsEvent writableEvent = new DiagnosticsEvent(identifier, messageType, metaIdentifier, message, exception);
                listener.Write(writableEvent);
            }
        }
    }
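
    A small C# sketch of the missing check: with the registered enum Type in hand, Enum.IsDefined can confirm that the messageType argument is a defined value of that enum before broadcasting. The helper name is invented here.
        using System;

        public static class EnumGuard
        {
            // True when 'value' is a boxed value of 'enumType' and one of its defined members.
            public static bool IsMemberOf(Type enumType, object value)
            {
                return value != null
                    && value.GetType() == enumType        // guard: IsDefined throws if the types differ
                    && Enum.IsDefined(enumType, value);   // value must be a defined member of the enum
            }
        }
    Broadcast could call EnumGuard.IsMemberOf(messageTypesFromIdentifier, messageType) and throw an ArgumentException when it returns false.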

    Read the article

  • page.insert_html not rendering partial correctly

    - by mathee
    The following is in the text_field. = f.text_field :title, :size => 50, :onchange => remote_function(:update => :suggestions, :url => {:action => :display_question_search_results}) The following is in display_questions_search_results.rjs. page.insert_html :bottom, 'suggestions', :partial => 'suggestions' Whenever the user types, I'd like to search the database for any tuples that match the keywords in the text field. Then, display those results. But, at the moment, _suggestions.haml only contains the word "suggestions!!". But, instead of seeing "suggestions!!" in the suggestions div tag, I get: try { Element.insert("suggestions", { bottom: "suggestions!!" }); } catch (e) { alert('RJS error:\n\n' + e.toString()); alert('Element.insert(\"suggestions\", { bottom: \"suggestions!!\" });'); throw e } I've been trying to find out why this is being done, but the previously asked questions I found seem more complicated than what I'm doing...

    Read the article

  • deleting object with template for int and object

    - by Yokhen
    Alright so Say I have a class with all its definition, bla bla bla... template <class DT> class Foo{ private: DT* _data; //other stuff; public: Foo(DT* data){ _data = data } virtual ~Foo(){ delete _data } //other methods }; And then I have in the main method: int main(){ int number = 12; Foo<anyRandomClass>* noPrimitiveDataObject = new Foo<anyRandomClass>(new anyRandomClass()); Foo<int>* intObject = new Foo<int>(number); delete noPrimitiveDataObject; //Everything goes just fine. delete intObject; //It messes up here, I think because primitive data types such as int are allocated in a different way. return 0; } My question is: What could I do to have both delete statements in the main method work just fine? P.S.: Although I have not actually compiled/tested this specific code, I have reviewed it extensively (as well as indented. You're welcome.), so if you find a mistake, please be nice. Thank you.

    Read the article
