Search Results

Search found 3040 results on 122 pages for 'saving'.

Page 101/122

  • OpenCV: Getting and Setting Camera Settings

    - by jhaip
    I have been searching around and can't find an example of how to get and set the camera capture settings, such as capture resolution, fps, color balance, etc. I have only seen examples of how to change the settings when saving the captured video, but I want to be able to find all of the camera's capture modes and choose the one I want. For example, I am using the PS3eye webcam, and its test program lets you change the settings (320x240 at 15, 30, 60, or 120 fps; 640x480 at 15, 30, 60, or 75 fps). So is there a function in OpenCV for getting all of a camera's capture modes and choosing one? I remember that OpenFrameworks had a function to change these settings, but I would like to know how to do it in OpenCV. Here is the OpenFrameworks code (with OpenCV) that does roughly what I want:

        vidGrabber.setDeviceID( 4 );
        vidGrabber.setDesiredFrameRate( 30 ); // I want this
        vidGrabber.videoSettings();
        vidGrabber.setVerbose(true);
        vidGrabber.initGrabber(320,240); // And this
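
    As far as I know, OpenCV has no call that enumerates a camera's supported modes, but the highgui capture properties let you request a mode and then read back what the driver actually granted. A minimal sketch, in which the device index and the 320x240 at 60 fps request are assumptions; whether a given property is honored depends on the capture backend:

        #include <cstdio>
        #include <opencv2/highgui/highgui.hpp>

        int main() {
            cv::VideoCapture cap(0); // device index is an assumption
            if (!cap.isOpened()) return 1;

            // Request a mode; drivers may silently substitute the nearest
            // supported one, so read the properties back to verify.
            cap.set(CV_CAP_PROP_FRAME_WIDTH, 320);
            cap.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
            cap.set(CV_CAP_PROP_FPS, 60);

            std::printf("got %.0fx%.0f @ %.0f fps\n",
                        cap.get(CV_CAP_PROP_FRAME_WIDTH),
                        cap.get(CV_CAP_PROP_FRAME_HEIGHT),
                        cap.get(CV_CAP_PROP_FPS));
            return 0;
        }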

  • Core Data object into an NSDictionary with possible nil objects

    - by Chuck
    I have a Core Data object that has a bunch of optional values. I'm pushing a table view controller and passing it a reference to the object so I can display its contents in a table view. Because I want the table view displayed a specific way, I am storing the values from the Core Data object in an array of dictionaries, then using the array to populate the table view. This works great, and I got editing and saving working properly. (I'm not using a fetched results controller because I don't have anything to sort on.) The issue with my current code is that if one of the items in the object is missing, then I end up trying to put nil into the dictionary, which won't work. I'm looking for a clean way to handle this. I could do the following, but I can't help feeling there's a better way. (passedEntry is the Core Data object handed to the view controller when it is pushed; let's say it contains firstName, lastName, and age, all optional.)

        if ([passedEntry firstName] != nil) {
            [dictionary setObject:[passedEntry firstName] forKey:@"firstName"];
        } else {
            [dictionary setObject:@"" forKey:@"firstName"];
        }

    And so on. This works, but it feels kludgy, especially if I end up adding more items to the Core Data object down the road.
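
    One way to tidy this up is to funnel every value through a tiny helper that substitutes an empty string for nil. A sketch, assuming the GCC-style a ?: b shorthand is acceptable in the project:

        static inline id valueOrEmpty(id value) {
            // Replace nil with an empty string so the dictionary insert never throws.
            return value ?: @"";
        }

        [dictionary setObject:valueOrEmpty([passedEntry firstName]) forKey:@"firstName"];
        [dictionary setObject:valueOrEmpty([passedEntry lastName])  forKey:@"lastName"];
        [dictionary setObject:valueOrEmpty([passedEntry age])       forKey:@"age"];

    New optional attributes then cost one line each instead of five.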

  • ignoring elements order in xml validation against xsd

    - by segolas
    Hi, I am processing an email and saving some headers inside an XML document. I also need to validate the document against an XML schema. As the subject suggests, I need to validate while ignoring the order of the elements, but as far as I have read this seems to be impossible. Am I correct? If I put the headers in an <xsd:sequence>, the order obviously matters. If I use <xsd:all>, the order is ignored, but for some strange reason each element must then occur exactly once by default. My XML is something like this:

        <headers>
          <subject>bla bla bla</subject>
          <recipient>[email protected]</recipient>
          <recipient>rcp02domain.com</recipient>
          <recipient>[email protected]</recipient>
        </headers>

    But I think the final document should be valid even if the subject and recipient elements are swapped. Is there really nothing to be done?
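
    For what it's worth, one XSD 1.0 workaround is a repeating <xsd:choice>: it accepts the child elements in any order and any multiplicity, at the price of no longer enforcing per-element occurrence counts. A sketch, with the element types assumed to be plain strings:

        <!-- Any mix of subject and recipient elements, in any order -->
        <xsd:element name="headers">
          <xsd:complexType>
            <xsd:choice minOccurs="0" maxOccurs="unbounded">
              <xsd:element name="subject" type="xsd:string"/>
              <xsd:element name="recipient" type="xsd:string"/>
            </xsd:choice>
          </xsd:complexType>
        </xsd:element>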

  • Managing Many to Many relationships in asp.net Wizard Control

    - by Luis
    Say I have an entity with a lot of attributes. In the input form I have decided to implement a wizard control so I can collect information about this entity in several steps. The problem is that I need to collect information that has been modeled as many-to-many relationships. I am planning to use a Telerik grid view to manage these (add/edit/delete); the problem is where to store that data, since on an insert form the entity has not been created in the database yet. OK, so I can store all that info in temporary lists residing in the view state, waiting for the final submit where I dump it all into the DB, but in one of the steps I am collecting files... and storing files in the view state is out of the question, same as storing them in the session... I have been thinking of implementing it so that the user has to submit some info first (say, the first three steps), committing the data to the database to create the parent entity, and then start inserting all the child entities... but this will get weird and confusing, since on the first steps you are not saving the data to the DB and on the later ones you are committing directly... Does anyone have any thoughts on this? Thanks

  • Can I get an XPathNodeIterator directly from an XPath?

    - by Val
    I hope I'm just missing something obvious. I have a number of repeating nodes in an XML document:

        <root>
          <parent>
            <child/>
            <child/>
          </parent>
        </root>

    I need to examine the contents of each of the <child> elements in turn, so I need an XPathNodeIterator containing the node set of all the <child> nodes. If I have an XPath that would select the child nodes, e.g. /root/parent/child, is there any way to feed that directly to a new XPathNodeIterator? Everything I see in the docs and examples indicates I have to first get an XPathNavigator to the <parent>, then Select the child nodes, like:

        XPathNavigator nav = datasource.CreateNavigator().SelectSingleNode( "/root/parent" );
        XPathNodeIterator it = nav.Select( "./child" );
        foreach ( child in it ) { /* do something */ }

    I was hoping to skip the XPathNavigator and initialize the XPathNodeIterator with the XPath to the child nodes directly, something like:

        XPathNodeIterator it = new XPathNodeIterator("/root/parent/child");
        foreach ( child in it ) { /* do something */ }

    Is that possible? The benefit is not only saving a line of code, but also that I can use a single XPath expression rather than splitting the path to the <child> nodes in two: first to get the parent element, then to select its children.
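
    XPathNodeIterator has no public constructor, but a navigator created at the document root will take the full path in a single Select call, which gives the one-expression behavior asked about. A sketch, where the file name is an assumption:

        using System;
        using System.Xml.XPath;

        class Demo
        {
            static void Main()
            {
                XPathDocument doc = new XPathDocument("data.xml"); // hypothetical file
                XPathNodeIterator it = doc.CreateNavigator().Select("/root/parent/child");
                while (it.MoveNext())
                {
                    // it.Current is positioned on each <child> in turn
                    Console.WriteLine(it.Current.Name);
                }
            }
        }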

  • Should I use a binary or a text file for storing protobuf messages?

    - by nbolton
    Using Google protobuf, I am saving my serialized message data to a file; each file contains several messages. We have both C++ and Python versions of the code, so I need to use protobuf functions that are available in both languages. I have experimented with SerializeToArray and SerializeAsString, and there seem to be the following unfortunate conditions:

    SerializeToArray: As suggested in one answer, the best way to use this is to prefix each message with its data size. This would work great for C++, but in Python it doesn't look like this is possible; am I wrong?

    SerializeAsString: This generates a serialized string equivalent to its binary counterpart, which I can save to a file. But what happens if one of the characters in the serialization result is \n? How do we find line endings, or the endings of messages for that matter?

    Update: Please allow me to rephrase slightly. As I understand it, I cannot write binary data in C++ because then our Python application cannot read the data, since it can only parse string-serialized messages. Should I then instead use SerializeAsString in both C++ and Python? If yes, then is it best practice to store such data in a text file rather than a binary file? My gut feeling is binary, but as you can see this doesn't look like an option.
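
    For what it's worth, length-prefix framing looks doable on the Python side as well. A sketch of one way, assuming the streams are opened in binary mode; note that despite the name, SerializeToString returns raw bytes, so the file should be treated as binary either way:

        import struct

        def write_message(stream, msg):
            # Prefix each serialized message with a fixed 4-byte little-endian
            # length, so a C++ reader can use the same framing.
            data = msg.SerializeToString()
            stream.write(struct.pack('<I', len(data)))
            stream.write(data)

        def read_messages(stream, msg_type):
            while True:
                header = stream.read(4)
                if len(header) < 4:
                    break  # end of file
                (size,) = struct.unpack('<I', header)
                msg = msg_type()
                msg.ParseFromString(stream.read(size))
                yield msg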

  • Best practice how to store HTML in a database column

    - by tbrandao
    I have an application that modifies a table dynamically (think spreadsheet); upon saving the form (which the table is part of), I store that changed table (with user modifications) in a database column named html_Spreadhseet, along with the rest of the form data. Right now I'm just storing the HTML in plain text format with basic escaping of characters... I'm aware that this could be stored as a separate file; the source table (html_workseeet) already is. But from a data-handling perspective it's easier to save the changed HTML table to and from a column, so as to avoid having to come up with a file-management strategy (which folder will this live in, the folder must now be included in backups, security rules now need to apply to files, how to sync DB security with the file system, etc.), so to minimize these issues I'm only storing the <table>...</table> part in the database column. My question is: should I gzip the HTML, maybe use JSON, or some other format to easily store and retrieve the HTML from the database column? What is the best practice for storing HTML content in a database? Or should I just store it as I currently am, in an escaped text column?

  • More efficient R / Sweave / TeXShop work-flow?

    - by user594795
    I've now got everything working properly on my Mac OS X 10.6 machine so that I can create decent-looking LaTeX documents with Sweave that combine snippets of R code, output, and LaTeX formatting. Unfortunately, I feel like my work flow is a bit clunky and inefficient. Using TextWrangler, I write LaTeX code and R code (surrounded by <<>>= above and @ below each R code chunk) together in one .Rnw file. After saving changes, I call the .Rnw file from R using the Sweave command:

        Sweave(file="/Users/mymachine/Documents/Assign4.Rnw", syntax="SweaveSyntaxNoweb")

    In response, R outputs the following message:

        You can now run LaTeX on 'Assign4.tex'

    So then I find the .tex file (Assign4.tex) in the R directory and copy it over to the folder in my documents (~/Documents/) where the .Rnw file is sitting, to keep everything in one place. Then I open the .tex file in TeXShop and compile it there into PDF format. It is only at this point that I get to see any changes I have made to the document and whether it 'looks nice'. Is there a way I can compile everything with one button click? Specifically, it would be nice to call Sweave / R directly from TextWrangler or TeXShop. I suspect it might be possible to write a script in Terminal to do it, but I have no experience with Terminal. Please let me know if there are any other things I can do to streamline or improve my work flow.
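
    For reference, the whole chain can be driven by one small shell script. A sketch, assuming R and pdflatex are on the PATH and the .Rnw file lives in ~/Documents; TeXShop's engine mechanism (or a TextWrangler script menu entry) could then run it with a single click:

        #!/bin/sh
        # Sweave the .Rnw in place, run LaTeX on the result, open the PDF.
        cd ~/Documents || exit 1
        R CMD Sweave Assign4.Rnw && pdflatex Assign4.tex && open Assign4.pdf

    Running Sweave from the target directory also removes the copy-the-.tex-file step entirely.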

  • XMLHttpRequest POST Data Size

    - by usurper
    Hi, is there a size limit on an XHR POST request? I am using the POST method to save text data into MySQL via a PHP script, and the data is cut off. Firebug sends me the following message:

        ... Firebug request size limit has been reached by Firebug. ...

    This is my code for sending the data:

        function makeXHR(recordData)
        {
            xmlhttp = createXHR();
            var body = "q=" + encodeURIComponent(recordData);
            xmlhttp.open("POST", "insertRowData.php", true);
            xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", body.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.onreadystatechange = function()
            {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200)
                {
                    //alert(xmlhttp.responseText);
                    alert("Records were saved successfully!");
                }
            };
            xmlhttp.send(body);
        }

    The only solution I can think of is splitting the data and making a queue of XHR requests, but I don't like it. Is there another way?

  • ItemUpdating called twice after ItemAdded in event receiver

    - by Jason
    I've created an event receiver to handle the ItemAdded and ItemUpdating events on a document library in SharePoint 2010. I've encountered a problem: when I add a document to the library (e.g. by saving it back from Word), the ItemAdded method is correctly called, however it is then followed by two calls to ItemUpdating. I have removed all code from my handlers to ensure that it's not something I'm doing inside them that causes the problem. They literally look like:

        public override void ItemUpdating(SPItemEventProperties properties) { }
        public override void ItemAdded(SPItemEventProperties properties) { }

    Does anyone have a solution to this issue? Here is my elements.xml file for the event receiver:

        <Elements xmlns="http://schemas.microsoft.com/sharepoint/">
          <Receivers ListTemplateId="101">
            <Receiver>
              <Name>DocumentsEventReceiverItemUpdating</Name>
              <Type>ItemUpdating</Type>
              <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
              <Class>My.Namespace.DocumentsEventReceiver</Class>
              <SequenceNumber>10000</SequenceNumber>
              <Synchronization>Synchronous</Synchronization>
            </Receiver>
            <Receiver>
              <Name>DocumentsEventReceiverItemAdded</Name>
              <Type>ItemAdded</Type>
              <Assembly>$SharePoint.Project.AssemblyFullName$</Assembly>
              <Class>My.Namespace.DocumentsEventReceiver</Class>
              <SequenceNumber>10000</SequenceNumber>
              <Synchronization>Synchronous</Synchronization>
            </Receiver>
          </Receivers>
        </Elements>

  • XML File as Excel file.

    - by FrustratedWithFormsDesigner
    I have a number of reports that I run against my database that eventually need to go to the end users as Excel spreadsheets. Initially I was creating text reports, but the steps to convert the text to a spreadsheet were a bit cumbersome: there were too many steps to import the text into the spreadsheet, and multi-line text rows were imported as individual rows in Excel (which was incorrect). Currently, I am generating simple XML and saving the file with an ".xls" extension. This works better, but there is still the problem of Excel prompting the user with an XML import dialogue every time they open the file, and then having to save a new file if they add notes or change the layout (which they almost certainly will be doing). Sample "xls" file:

        <?xml version="1.0" standalone="yes"?>
        <report_rows>
          <row>
            <NAME>Test Data</NAME>
            <COUNT>345</COUNT>
          </row>
          <!-- many more row elements... -->
        </report_rows>

    Is there any way to add markup to the file to hint to Excel how it should import and handle the file? Ideally, the end user should be able to open and save the file like any other spreadsheet they create directly from Excel. Is this even possible?

    UPDATE: We are running Office 2003 here.

    UPDATE: The XML is generated from a sqlplus script; there is no option to use C#/.NET here.
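
    Office 2003 can open the XML Spreadsheet 2003 dialect (SpreadsheetML) directly, with no import dialogue, when the file carries the mso-application processing instruction. A sketch of the sample data in that form, which a sqlplus script can emit as easily as the current XML; the worksheet name is an assumption:

        <?xml version="1.0"?>
        <?mso-application progid="Excel.Sheet"?>
        <Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
                  xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">
          <Worksheet ss:Name="Report">
            <Table>
              <Row>
                <Cell><Data ss:Type="String">Test Data</Data></Cell>
                <Cell><Data ss:Type="Number">345</Data></Cell>
              </Row>
              <!-- many more Row elements... -->
            </Table>
          </Worksheet>
        </Workbook>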

  • Can I use Eclipse JDT to create new 'working copies' of source files in memory only?

    - by RYates
    I'm using Eclipse JDT to build a Java refactoring platform, for exploring different refactorings in memory before choosing one and saving it. I can create collections of working copies of the source files, edit them in memory, and commit the changes to disk using the JDT framework. However, I also want to generate new 'working copy' source files in memory as part of refactorings, and only create the corresponding real source file if I commit the working copy. I have seen various hints that this is possible; e.g. http://www.jarvana.com/jarvana/view/org/eclipse/jdt/doc/isv/3.3.0-v20070613/isv-3.3.0-v20070613.jar!/guide/jdt%5Fapi%5Fmanip.htm says "Note that the compilation unit does not need to exist in the Java model in order for a working copy to be created". So far I have only been able to create a new real file, i.e.

        ICompilationUnit newICompilationUnit = myPackage.createCompilationUnit(newName,
            "package piffle; public class Baz{private int i=0;}", false, null);

    This is not what I want. Does anyone know how to create a new 'working copy' source file that does not appear in my file system until I commit it? Or any other mechanism to achieve the same thing?
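
    The hint in the quoted documentation corresponds to taking a handle for a compilation unit that does not exist yet and opening it as a working copy under a WorkingCopyOwner; nothing should touch disk until commitWorkingCopy. A sketch under that assumption (exception handling elided; these calls can throw JavaModelException):

        // myPackage is the IPackageFragment from the question
        WorkingCopyOwner owner = new WorkingCopyOwner() {};
        ICompilationUnit handle = myPackage.getCompilationUnit("Baz.java"); // handle only, no file yet
        ICompilationUnit workingCopy = handle.getWorkingCopy(owner, null);
        workingCopy.getBuffer().setContents(
            "package piffle; public class Baz { private int i = 0; }");
        workingCopy.reconcile(ICompilationUnit.NO_AST, false, owner, null);
        // ...explore the refactoring in memory...
        workingCopy.commitWorkingCopy(false, null); // only now does Baz.java appear on disk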

  • How to do an additional search on archive in rails if record not found, by extending model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following: suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches are beginning to take time. So I want to speed things up by archiving data that is more than 3 years old: moving it into a table called orders-archive, and then purging it from the orders table. If we need to research something, or a customer wants to pull older information, they still can, but 99% of lookups are done on orders no older than a year and a half, so there is no reason to keep looking through the older data all the time. These move-and-purge operations can then be cron'd to run on a weekly basis. I have already done some tests, and I know that I will cut my search times roughly by a factor of four. So far so good, right? However, I was thinking about how to implement the archival lookups, and the only reasonable thing I can think of is some sort of if/else: if not found in orders, do a search in orders-archive. However, I have about 20 tables that I want to archive, and who knows how many searches/finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant Rails way to solve this problem, perhaps by extending a model somehow? Has anyone dealt with a similar case before? Thank you.
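
    One Rails-flavored sketch of the fallback idea: give each archived table a twin model and add the if/else once, at the class level, instead of at every call site. OrderArchive, the rescue-based fallback, and the Rails 2.x set_table_name idiom are all assumptions here:

        class OrderArchive < ActiveRecord::Base
          set_table_name 'orders-archive'  # twin model over the archive table
        end

        class Order < ActiveRecord::Base
          # Fall back to the archive only when the live table misses.
          def self.find_with_archive(*args)
            find(*args)
          rescue ActiveRecord::RecordNotFound
            OrderArchive.find(*args)
          end
        end

    The same few lines could be generated for all 20 tables from a shared module, though finders that use conditions rather than ids would still need case-by-case thought.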

  • Why is this ajax call being made even though it shouldn't be

    - by user2921557
    I have this validation script I'm working on, but I can't see why I'm having an issue. You can see that I have a check = false / true test before it runs the ajax call. However, even if a field is empty and check is set to false, it still runs the call:

        // JavaScript - Update Password AJAX
        $(document).ready(function () {
            // When the form is submitted
            $('.updatepasswordform').submit(function () {
                var check = true;
                // Get the values
                var password1 = $("input[name=password1]").val();
                var password2 = $("input[name=password2]").val();
                var newpassword = $("input[name=newpassword]").val();
                /* Password Validation */
                // If fields are empty
                if (password1 === '') {
                    check = false;
                    $("input[name=password1]").css('border', 'solid 2px red');
                }
                // If fields are empty
                if (password2 === '') {
                    check = false;
                    $("input[name=password2]").css('border', 'solid 2px red');
                }
                // If fields are empty
                if (newpassword === '') {
                    check = false;
                    $("input[name=newpassword]").css('border', 'solid 2px red');
                }
                if (check = true) {
                    $.ajax({
                        type: "POST",
                        url: "process/updatepassword.php",
                        data: $(".updatepasswordform").serialize(),
                        dataType: "json",
                        success: function (response) {
                            /* Checks for database validation, removed for space saving */
                        }
                    });
                }
                return false;
            });
        });

  • Greasemonkey failing to GM_setValue()

    - by HonoredMule
    I have a Greasemonkey script that uses a JavaScript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved prior to encountering my problem. One value refuses to save, and I cannot for the life of me determine why. The following problem code:

    - Works for other, larger objects being maintained.
    - Is presently handling a smaller total amount of data than previously worked.
    - Is not colliding with any function or other object definitions.
    - Can (optionally) successfully save the problem storage key as "{}" during code startup.

        this.save = function(table) {
            var tables = this.tables;
            if (table)
                tables = [table];
            for (i in tables) {
                logger.log(this[tables[i]]);
                logger.log(JSON.stringify(this[tables[i]]));
                GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]]));
                logger.log(tables[i] + "_" + this.user + " updated");
                logger.log(GM_getValue(tables[i] + "_" + this.user));
            }
        }

    The problem is consistently reproducible, and the logging statements produce the following output in Firebug:

        Object { 54,10 = Object }   // Expansion shows complete contents as expected, but there is
                                    // one oddity: Firebug highlights the array keys in purple
                                    // instead of the usual black for anonymous objects.
        {"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}}   // The correctly parsed string.
        bookmarks_HonoredMule saved
        undefined

    I have tried altering the format of the object keys, to no effect. Further narrowing down the issue: this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that also does not help. Reloading the page confirms that saving of the non-empty object truly failed. Any idea what could cause this behavior? I've thoroughly explored the possibility of hitting size constraints, but it doesn't appear that can be the problem: as previously mentioned, I've already reduced storage usage. Other, larger objects still save, and the total number of objects, which was not high to begin with, has been further reduced by an amount greater than the quantity of data I'm attempting to store here.

  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all. First: this is not meant to be a 'which is better' flame-war thread... Rather, I need help in making an architecture decision, and an argument to put forward to my boss. Skipping the details: I would simply love to know and find the results of anyone who has done performance comparisons of shell scripts vs. general-purpose interpreted programming languages such as C# or Java... Surprisingly, I have spent some time searching on Google and here and could not find any of this data. Has anyone ever done these comparisons, in different use cases? For example: hitting a database in a loop of XYZ iterations, doing different types of SQL queries (Oracle preferred, but MSSQL would do) such as any of the CRUD ops; and also, without hitting a database, regular 50k-iteration-loop comparisons doing different types of calculations, and things of that nature? In particular, right now I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted GPPL would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations/instructions... Before you ask, 'Why not just write a quick test yourself?': the answer is that I've been a Windows developer my whole life/career and have very limited knowledge of shell scripting, not to mention *nix as a whole... So asking the question here of the more experienced folks would be greatly beneficial, not to mention time-saving, as we are in near-perpetual deadline crunch as it is ;). Thanks so much in advance.
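
    As a starting point while hunting for published numbers, a crude harness like the following times the database case from the shell side; porting the same loop to C# or Java gives a like-for-like comparison. The table name and credentials are placeholders:

        #!/bin/sh
        # Time 100 round-trips to Oracle via sqlplus. Each iteration spawns a
        # new process and a new connection, which is typically where a shell
        # loop loses to a language holding one persistent connection.
        time (
          i=0
          while [ $i -lt 100 ]; do
            echo "SELECT COUNT(*) FROM orders;" | sqlplus -s scott/tiger@orcl > /dev/null
            i=$((i + 1))
          done
        )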

  • Passing session between jsf backing bean and model

    - by Rachel
    Background: I have a backing bean with an upload method that listens for when a file is uploaded. I pass this file to a parser, and in the parser I validate the rows present in the CSV file. If validation fails, I have to log the information, saving it to a logging table in the database.

    My end goal: to get session information in the logging bean so that I can get the InitialContext and make a call to an EJB to save the data to the database.

    What is happening: In my upload backing bean I am getting the session, but when I call the parser I do not pass the session information along, as I do not want the parser to depend on the session (I want to unit test the parser individually). So in my parser I do not have session information. From the parser I make a call to the logging bean (just a bean with some EJB methods), but in this logging bean I need the session, because I need to get the InitialContext.

    Question: Is there a way in JSF to get, in my logging bean, the session I have in my upload backing bean? I tried:

        FacesContext ctx = FacesContext.getCurrentInstance();
        HttpSession session = (HttpSession) ctx.getExternalContext().getSession(false);

    but the session value was null. A more generic question would be: how can I get session information in a model bean or other beans that are referenced from backing beans in which we have the session? Is there a generic method in JSF for accessing session information throughout a JSF application?
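
    One detail worth checking in the snippet above: getSession(false) returns null whenever no session has been created yet for the current request. A sketch of the more defensive variants; whether FacesContext.getCurrentInstance() is non-null inside the logging bean (i.e. whether it runs on the JSF request thread) is the assumption here:

        import java.util.Map;
        import javax.faces.context.FacesContext;
        import javax.servlet.http.HttpSession;

        FacesContext ctx = FacesContext.getCurrentInstance();
        if (ctx != null) {
            // true = create the session if one does not exist yet
            HttpSession session = (HttpSession) ctx.getExternalContext().getSession(true);

            // or skip HttpSession entirely and use the portable session map
            Map<String, Object> sessionMap = ctx.getExternalContext().getSessionMap();
        }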

  • Updating database row from model

    - by Jamie Dixon
    Hey everyone, I'm having a few problems updating a row in my database using LINQ to SQL. Inside my model I have two methods for updating and saving from my controller, which in turn receives an updated model from my view. My model methods look like:

        public void Update(Activity activity)
        {
            _db.Activities.InsertOnSubmit(activity);
        }

        public void Save()
        {
            _db.SubmitChanges();
        }

    and the code in my controller looks like:

        [HttpPost]
        public ActionResult Edit(Activity activity)
        {
            if (ModelState.IsValid)
            {
                UpdateModel<Activity>(activity);
                _activitiesModel.Update(activity);
                _activitiesModel.Save();
            }
            return View(activity);
        }

    The problem I'm having is that this code inserts a new entry into the database, even though the model object I'm inserting-on-submit contains a primary-key field. I've also tried re-attaching the model object back to the data context, but this throws an error because the item already exists. Any pointers in the right direction will be greatly appreciated.

    UPDATE: I'm using dependency injection to instantiate my data context object, as follows:

        IMyDataContext _db;

        public ActivitiesModel(IMyDataContext db)
        {
            _db = db;
        }
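
    For reference, InsertOnSubmit always issues an INSERT regardless of the key value; the detached-update pattern in LINQ to SQL is Table.Attach. A sketch of one common form; the asModified overload assumes the table has a timestamp/rowversion column (or UpdateCheck.Never on its members), which is an assumption about this schema:

        public void Update(Activity activity)
        {
            // Attach the detached entity and mark it as modified, so that
            // SubmitChanges generates an UPDATE instead of an INSERT.
            _db.Activities.Attach(activity, true);
        }

    Without a version column, the usual alternative is to fetch the existing row by id and copy the posted values onto it before calling SubmitChanges.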

  • The proper way to script periodically pulling a page from an https site

    - by DarthShader
    I want to create a command-line script for Cygwin/Bash that logs into a site, navigates to a specific page, and compares it with the results of the last run. So far, I have it working with Lynx like so:

        # ----snipped, just setting variables----
        echo "# Command logfile created by Lynx 2.8.5rel.5 (29 Oct 2005)
        # ----snipped the recorded keystrokes-------
        key Right Arrow
        key p
        key Right Arrow
        key ^U" >> $tmp1

        # p, right arrow initiate the page saving
        # "type" the filename inside the "where to save" dialog
        for i in $(seq 0 $((${#tmp2} - 1)))
        do
            echo "key ${tmp2:$i:1}" >> $tmp1
        done

        # hit enter and quit
        echo "key ^J
        key y
        key q
        key y
        " >> $tmp1

        lynx -accept_all_cookies -cmd_script=$tmp1 https://thewebpage.com/login
        diff $tmp2 $oldComp
        mv $tmp2 $oldComp

    It definitely does not feel "right": the cmd_script consists of relative user actions instead of specifying exact link names and actions. So if anything on the site ever changes, switches places, or a new link is added, I will have to re-record the actions. Also, I can't check for any errors, so I can't abort the script if something goes wrong (login failed, etc.). Another alternative I have been looking at is Mechanize with Ruby (as a note, I have zero experience with Ruby). What would be the best way to improve or rewrite this?
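
    For comparison, the same flow in Ruby's Mechanize names the form fields explicitly and can fail fast. A sketch in which the form layout, field names, failure marker, and file names are all assumptions; note that older Mechanize releases live under the WWW:: namespace, while newer ones use plain Mechanize:

        #!/usr/bin/env ruby
        require 'rubygems'
        require 'mechanize'

        agent = WWW::Mechanize.new
        page  = agent.get('https://thewebpage.com/login')

        form = page.forms.first                  # assumption: login form is first
        form['username'] = 'user'                # assumed field names
        form['password'] = 'secret'
        page = agent.submit(form)
        abort 'login failed' if page.body =~ /invalid login/i  # assumed marker

        page = agent.get('https://thewebpage.com/the/specific/page')
        File.open('new.html', 'w') { |f| f.write(page.body) }
        system('diff new.html old.html')
        File.rename('new.html', 'old.html')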

  • Downloading a webpage in C# example

    - by Chris
    I am trying to understand some example code on this web page (http://www.csharp-station.com/HowTo/HttpWebFetch.aspx) that downloads a file from the internet. The piece of code quoted below goes through a loop getting chunks of data and saving them to a string until all the data has been downloaded. As I understand it, "count" contains the size of the downloaded chunk, and the loop runs until count is 0 (an empty chunk of data is downloaded). My question is: isn't it possible that count could be 0 without the file being completely downloaded? Say, if the network connection is interrupted, the stream may not have any data to read on a pass of the loop, and count would be 0, ending the download prematurely. Or does resStream.Read stop the program until it gets data? Is this the correct way to save a stream?

        int count = 0;
        do
        {
            // fill the buffer with data
            count = resStream.Read(buf, 0, buf.Length);

            // make sure we read some data
            if (count != 0)
            {
                // translate from bytes to ASCII text
                tempString = Encoding.ASCII.GetString(buf, 0, count);

                // continue building the string
                sb.Append(tempString);
            }
        } while (count > 0); // any more data to read?

  • Confirm bug Magento 1.4 'show/hide editor' in CMS

    - by latvian
    Hi. When entering code in a CMS static block (and possibly a page as well), if the code contains empty DIV tags such as:

        <a href="javascript:hide1(),show2(),hide3()"><div class="dropoff_button"></div></a>

    the DIV tags will be gone the next time you open the block to edit. It will look like this, without the div tags:

        <a href="javascript:hide1(),show2(),hide3()"> </a>

    ...and saving again modifies your code further. I think it has something to do with the 'show/hide editor'. By default the block opens in the WYSIWYG editor, so when updating a static block I don't see any other solution than:

    1. 'Hide the editor' by clicking 'show/hide editor'.
    2. Delete the old code from the editor.
    3. Get a copy of the code that doesn't have the DIVs missing.
    4. Merge the new code with the code from step 3 in some editing software other than Magento.
    5. Paste the result into the Magento editor.
    6. Save.

    Is this a bug? What is your solution? Can I turn off the WYSIWYG editor?
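
    Magento 1.4's WYSIWYG mode wraps TinyMCE, whose cleanup pass drops empty block elements (stated here as an assumption about this version). One common workaround is to make the 'empty' element non-empty so there is nothing to strip:

        <!-- A non-breaking space gives the div content, so the editor keeps it -->
        <a href="javascript:hide1(),show2(),hide3()"><div class="dropoff_button">&nbsp;</div></a>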

  • GridView ObjectDataSource LINQ Paging and Sorting using multiple table query.

    - by user367426
    I am trying to create a paging and sorting object data source that, before execution, returns all results, then sorts on those results before filtering, and then uses the Take and Skip methods, with the aim of retrieving just a subset of results from the database (saving on database traffic). This is based on the following article: http://www.singingeels.com/Blogs/Nullable/2008/03/26/Dynamic_LINQ_OrderBy_using_String_Names.aspx

    Now I have managed to get this working, even creating lambda expressions to reflect the sort expression returned from the grid, and even finding out the data type to sort for DateTime and Decimal:

        public static string GetReturnType<TInput>(string value)
        {
            var param = Expression.Parameter(typeof(TInput), "o");
            Expression a = Expression.Property(param, "DisplayPriceType");
            Expression b = Expression.Property(a, "Name");
            Expression converted = Expression.Convert(Expression.Property(param, value), typeof(object));
            Expression<Func<TInput, object>> mySortExpression = Expression.Lambda<Func<TInput, object>>(converted, param);
            UnaryExpression member = (UnaryExpression)mySortExpression.Body;
            return member.Operand.Type.FullName;
        }

    Now the problem I have is that many of the queries return joined tables, and I would like to sort on fields from those other tables. When executing a query, you can create a function that assigns the properties from other tables to properties created in the partial class:

        public static Account InitAccount(Account account)
        {
            account.CurrencyName = account.Currency.Name;
            account.PriceTypeName = account.DisplayPriceType.Name;
            return account;
        }

    So my question is: is there a way to assign the value from the joined table to the property of the current table's partial class? I have tried using:

        from a in dc.Accounts
        where a.CompanyID == companyID && a.Archived == null
        select new { PriceTypeName = a.DisplayPriceType.Name })

    but this seems to mess up my SortExpression. Any help on this would be much appreciated; I do understand that this is complex stuff.
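
    The anonymous type is what breaks the string-based sort expressions, and LINQ to SQL does not allow projecting into the mapped Account type itself inside a query. A sketch of the usual middle ground, a small named row class; AccountRow and its members are hypothetical:

        public class AccountRow
        {
            public int AccountID { get; set; }
            public string PriceTypeName { get; set; }
        }

        // Still IQueryable<AccountRow>, so the string-built OrderBy plus
        // Skip/Take compose and translate to SQL before any rows are fetched.
        var rows = from a in dc.Accounts
                   where a.CompanyID == companyID && a.Archived == null
                   select new AccountRow
                   {
                       AccountID = a.AccountID,
                       PriceTypeName = a.DisplayPriceType.Name
                   };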

  • Entity Framework one-to-one relationship mapping flattened in code

    - by Josh Close
    I have a table structure like so:

        Address:
            AddressId int not null primary key identity
            ...more columns

        AddressContinental:
            AddressId int not null primary key identity, foreign key to pk of Address
            County
            State

        AddressInternational:
            AddressId int not null primary key identity, foreign key to pk of Address
            ProvinceRegion

    I don't have control over the schema; this is just the way it is. Now, what I want to do is have a single Address object:

        public class Address
        {
            public int AddressId { get; set; }
            public County County { get; set; }
            public State State { get; set; }
            public ProvinceRegion ProvinceRegion { get; set; }
        }

    I want EF to pull it out of the database as a single entity. When saving, I want to save the single entity and have EF know to split it across the three tables. How would I map this in EF 4.1 Code First? I've been searching around and haven't found anything that matches my case yet.

    UPDATE: An address record will have a record in Address and a record in either AddressContinental or AddressInternational, but not both.
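
    Entity splitting would flatten the columns into one class, but it requires a row in every mapped table, which the update above rules out. The shape described, a base row plus exactly one of two optional extension rows, is what table-per-type inheritance maps to in EF 4.1 Code First. A sketch, at the cost of two subclasses instead of a single flat Address; the string property types are assumptions:

        public class Address
        {
            public int AddressId { get; set; }
            // ...more columns
        }

        public class ContinentalAddress : Address
        {
            public string County { get; set; }
            public string State { get; set; }
        }

        public class InternationalAddress : Address
        {
            public string ProvinceRegion { get; set; }
        }

        public class AddressContext : DbContext
        {
            public DbSet<Address> Addresses { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // TPT: each type keeps its own table, joined on the shared key
                modelBuilder.Entity<Address>().ToTable("Address");
                modelBuilder.Entity<ContinentalAddress>().ToTable("AddressContinental");
                modelBuilder.Entity<InternationalAddress>().ToTable("AddressInternational");
            }
        }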

  • Where to store: User connection information?

    - by TomTom
    ;) I am writing a .NET application where the user connects to a given server. All information within the application is stored on the server, but I want/need to store the following information for the user:

    - The server he connected to last.
    - The username he used to connect last (and no, no password, never ever).

    Any idea where this is best stored? The application config file is not sensible (user != admin, and application.config is write-protected for him). So my options are:

    - In the registry, as two keys under my own subkey.
    - In a sort of INI file stored in the user's data directory (AppData). This would also allow later expansion (saving more information, some of which may not fit into the registry).

    Anyone have a tip? Other alternatives? So far I tend toward the AppData directory with my own subfolder, simply because it is a nice preparation for later keeping something like a local copy of the configuration there.
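
    A sketch of the AppData variant; the folder name, file name, and key=value format are all assumptions:

        using System;
        using System.IO;

        static class LastConnection
        {
            static string SettingsPath()
            {
                // Per-user, writable without admin rights, roams with the profile.
                string dir = Path.Combine(
                    Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
                    "MyApp");
                Directory.CreateDirectory(dir); // no-op if it already exists
                return Path.Combine(dir, "connection.ini");
            }

            public static void Save(string server, string user)
            {
                File.WriteAllLines(SettingsPath(),
                    new[] { "server=" + server, "user=" + user });
            }
        }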

  • Batch script is not executed if chcp was called

    - by Andy
    Hello! I'm trying to delete some files with Unicode characters in their names using a batch script (it's a requirement). So I run cmd and execute:

        > chcp 65001

    effectively setting the code page to UTF-8. And it works:

        D:\temp\1>dir
         Volume in drive D has no label.
         Volume Serial Number is 8C33-61BF

         Directory of D:\temp\1

        02.02.2010  09:31    <DIR>          .
        02.02.2010  09:31    <DIR>          ..
        02.02.2010  09:32               508 1.txt
        02.02.2010  09:28                12 delete.bat
        02.02.2010  09:20                95 delete.cmd
        02.02.2010  09:13    <DIR>          Rún
        02.02.2010  09:13    <DIR>          ????? ???????
                       3 File(s)            615 bytes
                       4 Dir(s)  11 576 438 784 bytes free

        D:\temp\1>rmdir Rún

        D:\temp\1>dir
         Volume in drive D has no label.
         Volume Serial Number is 8C33-61BF

         Directory of D:\temp\1

        02.02.2010  09:56    <DIR>          .
        02.02.2010  09:56    <DIR>          ..
        02.02.2010  09:32               508 1.txt
        02.02.2010  09:28                12 delete.bat
        02.02.2010  09:20                95 delete.cmd
        02.02.2010  09:13    <DIR>          ????? ???????
                       3 File(s)            615 bytes
                       3 Dir(s)  11 576 438 784 bytes free

    Then I put the same rmdir commands in a batch script and save it in UTF-8 encoding. But when I run it, nothing happens, literally nothing: not even echo works from the batch script in this case. Even saving the script in the OEM encoding does not help. So it seems that when I change the code page to UTF-8 in the console, scripts just stop working. Does somebody know how to fix that?
