Search Results

Search found 15115 results on 605 pages for 'state pattern'.

Page 519 of 605

  • SQL Server Reset Identity Increment for all tables

    - by DanSpd
    Basically I need to reset the identity increment for all tables to its original seed. I tried some code, but it fails: http://pastebin.com/KSyvtK5b

    Actual code from the link:

        USE World00_Character
        GO
        -- Create a cursor to loop through the system objects and get each table name
        DECLARE TBL_CURSOR CURSOR
        -- Declare the SQL statement to cursor through
        FOR ( SELECT Name FROM Sysobjects WHERE Type='U' )
        -- Declare the @SQL variable which will hold our dynamic SQL
        DECLARE @SQL NVARCHAR(MAX);
        SET @SQL = '';
        -- Declare the @TblName variable which will hold the name of the current table
        DECLARE @TblName NVARCHAR(MAX);
        -- Open the cursor
        OPEN TBL_CURSOR
        -- Set up the fetch loop that will walk the cursor and set @TblName
        FETCH NEXT FROM TBL_CURSOR INTO @TblName
        -- Do this while we are not at the end of the record set
        WHILE (@@FETCH_STATUS <> -1)
        BEGIN
            -- Append this table's select count statement to our SQL variable
            SET @SQL = @SQL + ' ( SELECT '''+@TblName+''' AS Table_Name,COUNT(*) AS Count FROM '+@TblName+' ) UNION';
            -- Reseed the identity
            EXEC('DBCC CHECKIDENT ('+@TblName+',RESEED,(SELECT IDENT_SEED('+@TblName+')))');
            -- Pull the next record
            FETCH NEXT FROM TBL_CURSOR INTO @TblName
        -- End the cursor loop
        END
        -- Close and clean up the cursor
        CLOSE TBL_CURSOR
        DEALLOCATE TBL_CURSOR
        -- Since we were adding UNION at the end of each part, the last query will have
        -- an extra UNION. Let's trim it off.
        SET @SQL = LEFT(@SQL,LEN(@SQL)-6);
        -- Order by either Count or Table_Name by picking which line to execute below.
        SET @SQL = @SQL + ' ORDER BY Count';
        --SET @SQL = @SQL + ' ORDER BY Table_Name';
        -- Now that our dynamic SQL statement is ready, let's execute it.
        EXEC (@SQL);
        GO

    Error message:

        Msg 102, Level 15, State 1, Line 1
        Incorrect syntax near '('.

    How can I either fix that SQL or reset the identity for all tables to its original seed? Thank you.
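
    One possible shape of a fix, sketched here rather than prescribed: DBCC CHECKIDENT will not accept a subquery as its RESEED argument, so the seed has to be read into a variable first and concatenated as a literal. The sketch below assumes SQL Server 2005+ (sys.tables) and default-schema table names; reseeding to the declared seed also assumes that is really the value wanted.

        -- Minimal sketch, not the poster's original script: loop over user tables,
        -- look up each table's declared identity seed, then reseed with a literal value.
        DECLARE @TblName SYSNAME, @Seed BIGINT, @Sql NVARCHAR(MAX);

        DECLARE TBL_CURSOR CURSOR FAST_FORWARD FOR
            SELECT name FROM sys.tables;

        OPEN TBL_CURSOR;
        FETCH NEXT FROM TBL_CURSOR INTO @TblName;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- IDENT_SEED returns NULL for tables without an identity column.
            SET @Seed = CAST(IDENT_SEED(@TblName) AS BIGINT);
            IF @Seed IS NOT NULL
            BEGIN
                SET @Sql = N'DBCC CHECKIDENT(' + QUOTENAME(@TblName) + N', RESEED, '
                         + CAST(@Seed AS NVARCHAR(40)) + N')';
                EXEC (@Sql);
            END
            FETCH NEXT FROM TBL_CURSOR INTO @TblName;
        END
        CLOSE TBL_CURSOR;
        DEALLOCATE TBL_CURSOR;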

    Read the article

  • Rhino ServiceBus: Sagas with multiple messages

    - by illdev
    I have a saga that can handle multiple messages, like so:

        public class OrderSaga : ISaga<Order>
            , InitiatedBy<StartOrderSaga>
            , Orchestrates<CancelOrder>
            , Orchestrates<PaymentForOrderReceived>
            , Orchestrates<CheckOrderWasPaid>
            , Orchestrates<OrderAbandoned>
            , Orchestrates<CheckOrderHasBeenShipped>
            , Orchestrates<OrderShipped>
            , Orchestrates<CheckOrderHasDelayDuringShipment>
            , Orchestrates<OrderArrivedAtDestination>
            , Orchestrates<OrderCompleted>
        { ... }

    but only Orchestrates<CancelOrder> seems to be picked up. So I suppose (I did not find the line, but am under a strong impression this is so) that only the first Orchestrates<T> is registered. Probably this is by design. From what I imagined a saga to be, it seems only logical that it receives many different messages, but I might be wrong. I might be wrong about my whole assumption, too :) How am I supposed to handle this? Are sagas supposed to handle only one message (in my case a ChangeStateMessage<State>), or should I wire up the other ConsumerOf/Orchestrates implementations by hand?

    Read the article

  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all the objects I create over the entire life span of my application, as if it were an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and that I don't re-fetch information I have already fetched from the database. So all my objects will basically be in one place - my collection.

    A nice thing I can do with this is avoid database calls to get data I already have (even if I updated it after retrieval it is still up-to-date, unless of course some other process updated it, but that is a different concern). I don't want to call new Customer("James Thomas") again if I already initialized James Thomas sometime in the past. Currently I end up with multiple copies of the same object across the appdomain - some out of sync, others in sync - and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (per process would be even better, if possible).

    I can't use regular collections like List or ArrayList, because I cannot pass parameters by their real local reference to the existing Add() methods using ref, so that's not too good I think. So how can this be implemented - can it be implemented at all? A 'linked list' type of class with all methods working with ref & out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)?

    Bottom line: I don't want to re-create an object if I've already created it before during the application's lifetime, unless I decide to re-create it explicitly (maybe it's out of date, so I have to fetch it again from the db). Are there alternatives?
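
    What is being described here sounds like an identity map: a registry keyed by database id that hands back the existing instance when one is already cached. A minimal sketch of that idea (the key type, Customer type and factory delegate are illustrative, not from the question; it assumes .NET 4's ConcurrentDictionary - a locked Dictionary works the same way on older frameworks):

        // Minimal identity-map sketch: at most one live instance per key.
        using System;
        using System.Collections.Concurrent;

        public class IdentityMap<TKey, TValue> where TValue : class
        {
            private readonly ConcurrentDictionary<TKey, TValue> _items =
                new ConcurrentDictionary<TKey, TValue>();

            // Returns the cached instance if present, otherwise creates and caches it.
            public TValue GetOrAdd(TKey key, Func<TKey, TValue> factory)
            {
                return _items.GetOrAdd(key, factory);
            }

            // Drops an instance so the next request re-fetches it (e.g. it went stale).
            public bool Invalidate(TKey key)
            {
                TValue removed;
                return _items.TryRemove(key, out removed);
            }
        }

        // Usage: the same key always yields the same reference within the process.
        // var customers = new IdentityMap<int, Customer>();
        // var james = customers.GetOrAdd(42, id => LoadCustomerFromDb(id));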

    Read the article

  • How to design Models the correct way: Object-oriented or "Package"-oriented?

    - by ajsie
    I know that in OOP you want every object (from a class) to be a "thing", e.g. a user, a validator, etc. I know the basics of MVC and how the different parts interact with each other. However, I wonder whether the models in MVC should be designed according to traditional OOP design - that is to say, should every model be a database/table/row (solution 2)? Or is the intention more to collect methods that affect the same table or a bunch of related tables (solution 1)?

    Example: an Address book module in CodeIgniter, where I want to be able to CRUD a Contact and add/remove it to/from a CRUD-able Contact Group.

    Models solution 1: bunching all related methods together (not a real object, rather a "package"):

        class Contacts extends Model {
            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

    Models solution 2: the OOP way (one class per file):

        class Contact extends Model {
            private $name = '';
            private $id = '';
            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
        }

        class ContactGroup extends Model {
            private $name = '';
            private $id = '';
            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

    I don't know how to think when I want to create the models, and the above examples are my real tasks for creating an Address book. Should I just bunch all functions together in one class? Then the class contains different logic (contact and group), so it cannot hold properties that are specific to either one of them. Solution 2 works according to OOP, but I don't know why I should make such a division - what would the benefit be of having a Contact object, for example? It's surely not a User object, so why should a Contact "live" with its own state (properties and methods)? Those of you experienced with OOP/MVC, please shed some light on how one should think about this very concrete task.

    Read the article

  • Detecting branch reintegration or merge in pre-commit script

    - by Shawn Chin
    Within a pre-commit script, is it possible (and if so, how) to identify commits stemming from an svn merge? svnlook changed ... shows files that have changed, but does not differentiate between merges and manual edits. Ideally, I would also like to differentiate between a standard merge and a merge --reintegrate.

    Background: I'm exploring the possibility of using pre-commit hooks to enforce SVN usage policies for our project. One of the policies states that some directories (such as /trunk) should not be modified directly, and should be changed only through the reintegration of feature branches. The pre-commit script would therefore reject all changes made to these directories apart from branch reintegrations. Any ideas?

    Update: I've explored the svnlook command, and the closest I've got is to detect and parse changes to the svn:mergeinfo property of the directory. This approach has some drawbacks:

    - svnlook can flag up a change in properties, but not which property was changed (a diff against the proplist of the previous revision is required).
    - By inspecting changes in svn:mergeinfo, it is possible to detect that svn merge was run. However, there is no way to determine whether the commit is purely the result of the merge; changes made manually after the merge will go undetected.

    (Related post: Diff transaction tree against another path/revision)

    Read the article

  • Getting "[Microsoft][ODBC SQL Server Driver][SQL Server] Incorrect syntax near 'Microsoft'"

    - by brohjoe
    Hi Experts, I'm getting an error: "[Microsoft][ODBC SQL Server Driver][SQL Server] Incorrect syntax near 'Microsoft'." Here is the code:

        Dim conn As ADODB.Connection
        Dim rst As ADODB.Recordset
        Dim stSQL As String

        Public Sub loadData()
            'This was set up using Microsoft ActiveX Data Components version 6.0.
            'Create the ADODB connection object, open the connection and construct
            'the connection string object, which is the DSN name.
            Set conn = New ADODB.Connection
            conn.ConnectionString = "sql_server"
            conn.Open
            'conn.Execute (strSQL)

            On Error GoTo ErrorHandler

            'Open Excel and run the query to export data to SQL Server.
            strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE(Microsoft.ACE.OLEDB.12.0;" & _
                     "Data Source=C:\Workbook.xlsx;" & _
                     "Extended Properties=Excel 12.0; [Sales Orders])"
            conn.Execute (strSQL)

        'Error handling.
        ErrorExit:
            'Reclaim memory from the connection objects
            Set rst = Nothing
            Set conn = Nothing
            Exit Sub

        ErrorHandler:
            MsgBox Err.Description, vbCritical
            Resume ErrorExit

            'Clean up and reclaim memory resources.
            conn.Close
            If CBool(cnt.State And adStateOpen) Then
                Set rst = Nothing
                Set conn = Nothing
            End If
        End Sub
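
    For what it's worth, the syntax error is most likely the unquoted arguments: OPENDATASOURCE expects its provider name and init string as two quoted strings, and the source table is referenced after the closing parenthesis. A sketch of the T-SQL the VBA would need to build (the worksheet name [Sales Orders$] is an assumption - Excel sheets are normally addressed with a trailing $ - and the server may also need 'Ad Hoc Distributed Queries' enabled):

        -- Sketch only: provider and connection string quoted, table reference outside the parentheses.
        SELECT *
        INTO   SalesOrders
        FROM   OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0',
                              'Data Source=C:\Workbook.xlsx;Extended Properties=Excel 12.0')...[Sales Orders$]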

    Read the article

  • [Java] Cluster Shared Cache

    - by GuiSim
    Hi everyone. I am searching for a Java framework that would allow me to share a cache between multiple JVMs. What I need is something like Hazelcast but without the "distributed" part: I want to be able to add an item to the cache and have it automatically synced to the other "group member" caches. If possible, I'd like the cache to be synced via a reliable multicast (or something similar).

    I've looked at Shoal, but sadly the "Distributed State Cache" seems like an insufficient implementation for my needs. I've looked at JBoss Cache, but it seems a little overkill for what I need to do. I've looked at JGroups, which seems to be the most promising tool for what I need to do. Does anyone have experience with JGroups, preferably using it as a shared cache? Any other suggestions? Thanks!

    EDIT: We're starting tests to help us decide between Hazelcast and Infinispan; I'll accept an answer soon.

    EDIT: Due to a sudden requirements change, we don't need a distributed map anymore. We'll be using JGroups as a low-level signaling framework. Thanks everyone for your help.

    Read the article

  • InfoPath FormControl on STA worker thread

    - by Rob Ford
    I have a .NET class that exposes two public methods: one to create an InfoPath form and another to export it to one of the supported formats. I'm using the Microsoft FormControl to do this. It's hosted by a Form that never gets displayed. I get called by a WinForms app, but on an MTA worker thread instead of the UI thread. So I create an STA thread and execute on that, which works exactly once and then results in this exception:

        System.InvalidOperationException was unhandled
          Message="Unable to get the window handle for the 'FormControl' control. Windowless ActiveX controls are not supported."
          Source="System.Windows.Forms"
          StackTrace:
            at System.Windows.Forms.AxHost.EnsureWindowPresent()
            at System.Windows.Forms.AxHost.InPlaceActivate()
            at System.Windows.Forms.AxHost.TransitionUpTo(Int32 state)
            at System.Windows.Forms.AxHost.CreateHandle()
            at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
            at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
            at System.Windows.Forms.AxHost.EndInit()
            at InfoPathCreateStaThreadTest.FormControlHost.InitializeComponent() in C:\Windows\Temp\InfoPathCreateStaThreadTest\InfoPathCreateStaThreadTest\FormControlHost.Designer.cs:line 65

    After some experimenting, I started to suspect this is a message-pumping problem. I then came across this post, which makes me suspect so even more strongly: http://blogs.msdn.com/cbrumme/archive/2004/02/02/66219.aspx

    I tried various methods of pumping messages with no luck. I should mention that the alternative of automating the InfoPath application is not viable unless I can figure out how to hide the app. Any help would be much appreciated.
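
    One way to give the hosting form a real message pump is a single long-lived STA thread that owns the (hidden) form and runs Application.Run, rather than a fresh STA thread per call. A sketch of that idea only - FormControlHost stands in for the poster's form that hosts the FormControl, and how individual export calls are marshalled onto the thread is left out:

        // Sketch, not the poster's code: one long-lived STA thread owns the hidden
        // form hosting the FormControl and keeps a message loop running for it.
        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class FormControlWorker
        {
            private static Form _host;
            private static readonly ManualResetEvent _ready = new ManualResetEvent(false);

            public static void Start()
            {
                var thread = new Thread(() =>
                {
                    _host = new FormControlHost();     // hypothetical: the form hosting the InfoPath FormControl
                    IntPtr forceHandle = _host.Handle; // create the handle so Invoke can marshal onto this thread
                    _ready.Set();
                    Application.Run();                 // pump messages; the form is never shown
                });
                thread.SetApartmentState(ApartmentState.STA);
                thread.IsBackground = true;
                thread.Start();
                _ready.WaitOne();
            }

            // MTA callers hand work to the STA thread instead of touching the control directly.
            public static void Do(MethodInvoker work)
            {
                _host.Invoke(work);
            }
        }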

    Read the article

  • How do I pass a LuaTable between two Lua states using LuaInterface?

    - by user316675
    I've been trying to pass a LuaTable between two Lua states, like so:

        LuaManager L1 = new Lua();
        LuaManager L2 = new Lua();
        LuaTable table = L1.DoString("return {apple = 25}")[0];
        L2["tbl"] = table;
        double results = L2.DoString("return tbl[\"apple\"]")[0];
        Assert.AreEqual(25.0, results);

    The above test fails; I receive a return value of nil. Using the Immediate Window confirms that "table" is a non-null object, and that table["apple"] returns 25; it's something that's being lost in translation to L2. Interestingly, when the object is loaded back into the same state, the test works, like so:

        //Succeeds
        LuaManager lua = new Lua();
        LuaTable table = lua.DoString("return {apple = 25}")[0];
        lua["tbl"] = table;
        double results = lua.DoString("return tbl[\"apple\"]")[0];
        Assert.AreEqual(25.0, results);

    How can I safely pass the LuaTables without hassles? Thanks in advance!
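
    A table created in one Lua state generally cannot be used as-is by a different interpreter, so the usual workaround is to copy its contents into a fresh table owned by the second state. A rough sketch of that idea - written on the assumption that LuaInterface's Lua class exposes NewTable/GetTable and that LuaTable exposes Keys and an indexer (worth checking against the version in use):

        // Sketch: rebuild the table inside L2 rather than sharing L1's reference.
        L2.NewTable("tbl");
        LuaTable copy = L2.GetTable("tbl");
        foreach (object key in table.Keys)
        {
            copy[key] = table[key];   // shallow copy; nested tables would need recursion
        }
        double results = (double)L2.DoString("return tbl[\"apple\"]")[0];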

    Read the article

  • Converting TrueClass / FalseClass to integer.

    - by Nick Gorbikoff
    Hello. I'm trying to figure out if there is an easy way to do the following, short of adding a to_i method to TrueClass/FalseClass. Here is the dilemma: I have a boolean field in my Rails app, which is stored as a tinyint in MySQL. However, I need to generate XML based on the data in MySQL and send it to a customer - their SOAP service requires the field in question to have 0 or 1 as its value. So at the time of the XML generation I need to convert my false to 0 and my true to 1 (which is how they are stored in the DB).

    Since true and false lack a to_i method, I could write an if statement that generates either 1 or 0 depending on the true/false state. However, I have about 10 of these indicators, and creating an if/else for each is not very DRY. So what do you recommend I do? I could add a to_i method to the TrueClass/FalseClass classes, but I'm not sure where I should scope it in my Rails app - just inside this particular model, or somewhere else?
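
    Two common shapes for this, sketched rather than prescribed: a tiny helper method that works anywhere, or a core extension loaded once so every boolean in the app responds to to_i (the initializer file name below is just a suggestion):

        # Option 1: a one-line helper, no monkey-patching.
        def bool_to_i(value)
          value ? 1 : 0
        end

        # Option 2: core extensions, e.g. in config/initializers/boolean_to_i.rb
        class TrueClass
          def to_i
            1
          end
        end

        class FalseClass
          def to_i
            0
          end
        end

        # Usage: true.to_i  #=> 1
        #        false.to_i #=> 0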

    Read the article

  • WCF JSON Service returns XML on Fault

    - by Anthony Johnston
    I am running a ServiceHost to test one of my services, and all works fine until I throw a FaultException - bang, I get XML, not JSON.

    My service contract - lovely:

        /// <summary>
        /// <para>Get category by id</para>
        /// </summary>
        [OperationContract(AsyncPattern = true)]
        [FaultContract(typeof(CategoryNotFound))]
        [FaultContract(typeof(UnexpectedExceptionDetail))]
        IAsyncResult BeginCategoryById(
            CategoryByIdRequest request,
            AsyncCallback callback,
            object state);

        CategoryByIdResponse EndCategoryById(IAsyncResult result);

    Host set-up - scrummy yum:

        var host = new ServiceHost(serviceType, new Uri(serviceUrl));
        host.AddServiceEndpoint(
            serviceContract,
            new WebHttpBinding(),
            "")
            .Behaviors.Add(
                new WebHttpBehavior
                {
                    DefaultBodyStyle = WebMessageBodyStyle.Bare,
                    DefaultOutgoingResponseFormat = WebMessageFormat.Json,
                    FaultExceptionEnabled = true
                });
        host.Open();

    Here's the call - oo belly ache:

        var request = WebRequest.Create(serviceUrl + "/" + serviceName);
        request.Method = "POST";
        request.ContentType = "application/json; charset=utf-8";
        request.ContentLength = 0;
        try
        {
            // receive response
            using (var response = request.GetResponse())
            {
                var responseStream = response.GetResponseStream();
                // convert back into referenced object for verification
                var deserialiser = new DataContractJsonSerializer(typeof (TResponseData));
                return (TResponseData) deserialiser.ReadObject(responseStream);
            }
        }
        catch (WebException wex)
        {
            var response = wex.Response;
            using (var responseStream = response.GetResponseStream())
            {
                // convert back into fault
                //var deserialiser = new DataContractJsonSerializer(typeof(FaultException<CategoryNotFound>));
                //var fex = (FaultException<CategoryNotFound>)deserialiser.ReadObject(responseStream);
                var text = new StreamReader(responseStream).ReadToEnd();
                var fex = new Exception(text, wex);
                Logger.Error(fex);
                throw fex;
            }
        }

    The text variable contains the correct fault, but serialized as XML. What have I done wrong here?

    Read the article

  • Strategy for WCF server with .Net clients and Android clients?

    - by D.H.
    I am using WCF to write a server that should be able to communicate with .NET clients, Android clients and possibly other types of clients.

    The main type of client is a desktop application that will be written in .NET. This client will usually be on the same intranet as the server. It will make an initial call to the server to get the current state of the system and will then receive updates from the server whenever a value changes. These updates are frequent, perhaps once a second.

    The Android clients will connect over the Internet. This client is also interested in updates, but it is not as critical as for the desktop client, so a (less frequent) polling scenario might be acceptable.

    All clients will have to log in to use the services, and when connecting over the Internet the connection should be secure. I am familiar with WCF, but I am not sure which bindings are most appropriate for the scenario and what security solution to use. Also, I have not used Android, but I would like to make it as simple as possible for the person implementing the Android client to consume my services. So, what is my strategy?

    Read the article

  • iPhone SDK: Progressive UITableView updates, smoothly animated

    - by boliva
    Hi all, there's something I've been trying to do with no success. I have a UITableView whose view controller is both its delegate and dataSource. On it I also have a mutable array as a property, which basically acts as the table view's data source. This mutable array gets continuously updated from an online XML source.

    What I haven't been able to accomplish is to animate adding new rows at the top of the table view and deleting the oldest rows at its bottom (the idea being that I always show the same amount of data, say the latest 10 entries).

    The method I'm actually using is: after finishing receiving and parsing the XML data, I remove all the contents of the mutable array and store the new entries in it, then reload the table view (calling the reloadData method). In between I use a CATransition in order to 'fade' the table view from its previous state to its new state.

    I tried using beginUpdates and endUpdates, with code to selectively remove the old rows from the table view and add the new ones to it (and to its data source), but the animation takes place on the whole table instead of only the affected rows.

    What I want to accomplish is, after an update, to delete as many rows from the bottom of the table as new entries I have retrieved, sliding the table view down and adding the new rows at the top, smoothly animated. Best regards,
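
    For reference, the usual shape of a batched row animation: mutate the backing array first, then tell the table exactly which index paths were inserted and removed inside a single beginUpdates/endUpdates pair, so the counts stay consistent. A sketch only - self.entries and newEntries are placeholders, and it assumes the table already holds at least as many rows as arrive:

        // Sketch: insert the new rows at the top and drop the same number off the bottom.
        - (void)insertNewEntries:(NSArray *)newEntries
        {
            NSMutableArray *inserts = [NSMutableArray array];
            NSMutableArray *deletes = [NSMutableArray array];

            for (NSUInteger i = 0; i < [newEntries count]; i++) {
                [inserts addObject:[NSIndexPath indexPathForRow:i inSection:0]];
                [deletes addObject:[NSIndexPath indexPathForRow:[self.entries count] - 1 - i inSection:0]];
            }

            // Update the data source so its count matches what the table expects after endUpdates.
            [self.entries removeObjectsInRange:NSMakeRange([self.entries count] - [newEntries count], [newEntries count])];
            [self.entries insertObjects:newEntries
                              atIndexes:[NSIndexSet indexSetWithIndexesInRange:NSMakeRange(0, [newEntries count])]];

            [self.tableView beginUpdates];
            [self.tableView insertRowsAtIndexPaths:inserts withRowAnimation:UITableViewRowAnimationTop];
            [self.tableView deleteRowsAtIndexPaths:deletes withRowAnimation:UITableViewRowAnimationBottom];
            [self.tableView endUpdates];
        }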

    Read the article

  • Why does the following Toggle function perform four different operations instead of two?

    - by marcamillion
    You can see the implementation here: http://jsfiddle.net/BMWZd/8/

    What I am trying to do: when you click 'John Brown', you see the first element at the top turn black. When you click it again, the border of the dotted circle disappears; when you click 'John Brown' a third time, you see something else; and finally, on the fourth click, it all disappears. What I am trying to achieve instead is: when you click once, everything turns black (like it does now), and when you click again, everything disappears and goes back to the original state.

    An important distinction - what I mean by "unclicked" is when one of the names in the box is no longer selected. So if you clicked John Brown and then moved on to Jack Dorsey, the #1 at the top should stay black. But if you were to click Jack Dorsey again, i.e. you 'unclicked' it, then it should disappear.

    Also, how do I tighten it up so that it responds quicker? Right now when you click, it feels like there is a bit of a lag between the click and the response.

    Edit 1: If anyone is interested, the UI this will be used in is for my web app - http://www.compversions.com

    Read the article

  • Protocol specific channel handlers

    - by Mickael Marrache
    I'm writing an application server that will receive SIP and DNS messages from the network. When I receive a message from the network, I understand from the documentation that I initially get a ChannelBuffer. I would like to determine which kind of message has been received (SIP or DNS) and decode it. To determine the message type I could dedicate a port to each type of message, but I would be interested to know whether another solution exists.

    My question is more about how to decode the ChannelBuffer. Is there a ChannelHandler provided by Netty to decode SIP or DNS messages? If not, what would be the right place in the type hierarchy to write my custom ChannelHandler? To illustrate my question, take the HttpRequestDecoder as an example; its hierarchy is:

        java.lang.Object
          org.jboss.netty.channel.SimpleChannelUpstreamHandler
            org.jboss.netty.handler.codec.frame.FrameDecoder
              org.jboss.netty.handler.codec.replay.ReplayingDecoder<HttpMessageDecoder.State>
                org.jboss.netty.handler.codec.http.HttpMessageDecoder
                  org.jboss.netty.handler.codec.http.HttpRequestDecoder

    Also, do I need to use two different ChannelHandlers for decoding and encoding, or is it possible to use a single ChannelHandler for both? Thanks

    Read the article

  • How to sort a date array in PHP

    - by Click Upvote
    I have an array in this format:

        Array
        (
            [0] => Array ( [28th February, 2009] => 'bla' )
            [1] => Array ( [19th March, 2009] => 'bla' )
            [2] => Array ( [5th April, 2009] => 'bla' )
            [3] => Array ( [19th April, 2009] => 'bla' )
            [4] => Array ( [2nd May, 2009] => 'bla' )
        )

    I want to sort them in ascending order of the dates (based on the month, day, and year). What's the best way to do that?

    Originally the emails are fetched in the MySQL date format, so it's possible for me to get the array in this state:

        Array
        (
            ['2008-02-28'] => 'some text',
            ['2008-03-06'] => 'some text'
        )

    Perhaps when it's in this format I could loop through them, remove all the '-' (hyphen) marks so they are left as integers, sort them using array_sort() and loop through them yet again? I'd prefer another way, as I'd be doing 3 loops with this per user. Thanks.

    Edit: I could also do this:

        $array[$index] = array('human' => '28 Feb, 2009', 'db' => '20080228', 'description' => 'Some text here');

    But using this, would there be any way to sort the array based on the 'db' element alone?

    Edit 2: Updated initial var_dump
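
    One loop-free way to sort by date, sketched with the structure from the last edit (a 'db' key holding a sortable date string): usort with a comparison callback. This assumes PHP 5.3+ for the closure syntax; a named comparison function works the same way on older versions.

        // Sketch: sort by the 'db' element (works for 'YYYYMMDD' or 'YYYY-MM-DD' strings,
        // since both compare correctly as strings).
        usort($array, function ($a, $b) {
            return strcmp($a['db'], $b['db']);
        });

        // If only the human-readable dates are available, strtotime can usually parse them first:
        // usort($array, function ($a, $b) {
        //     return strtotime($a['human']) - strtotime($b['human']);
        // });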

    Read the article

  • Preserving SCRIPT tags (and more) in CKEditor

    - by Jonathan Sampson
    Update: I'm thinking the solution to this problem is CKEDITOR.config.protectedSource, but my regular-expression experience is proving too juvenile to handle this issue. How would I go about exempting all tags that contain the 'preserved' class from being touched by CKEditor?

    Is it possible to create a block of code within CKEditor that will not be touched by the editor itself, and will be maintained in its intended state until explicitly changed by the user? I've been attempting to input JavaScript variables (bound in script tags) and a Flash movie, but CKEditor keeps rewriting my pasted code/markup, and in doing so breaks my code. I'm working with the following setup:

        <script type="text/javascript">
            var editor = CKEDITOR.replace("content", {
                height : "500px",
                width : "680px",
                resize_maxWidth : "680px",
                resize_minWidth : "680px",
                toolbar : [
                    ['Source','-','Save','Preview'],
                    ['Cut','Copy','Paste','PasteText','PasteFromWord','-','Print', 'SpellChecker', 'Scayt'],
                    ['Undo','Redo','-','Find','Replace','-','SelectAll','RemoveFormat'],
                    ['Bold','Italic','Underline','Strike','-','Subscript','Superscript'],
                    ['NumberedList','BulletedList','-','Outdent','Indent','Blockquote'],
                    ['JustifyLeft','JustifyCenter','JustifyRight','JustifyBlock'],
                    ['Link','Unlink','Anchor'],
                    ['Image','Table','HorizontalRule','SpecialChar']
                ]
            });
            CKFinder.SetupCKEditor( editor, "<?php print url::base(); ?>assets/ckfinder" );
        </script>

    UPDATE: I suppose the most ideal solution would be to preserve the contents of any tag that contains class="preserve", enabling much more than the limited exclusives.
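
    For the script-tag part at least, protectedSource is an array of regular expressions that gets pushed to (typically in config.js or before the editor is created), not a function that is called. A sketch of the idea - the class-based rule is an untested assumption and only matches a single, non-nested element:

        // Sketch: keep <script> blocks (and, tentatively, class="preserve" elements) away from the parser.
        CKEDITOR.config.protectedSource.push( /<script[\s\S]*?<\/script>/gi );
        // Assumption: a simple element carrying class="preserve", with no nested tag of the same name.
        CKEDITOR.config.protectedSource.push( /<(\w+)[^>]*class="[^"]*preserve[^"]*"[^>]*>[\s\S]*?<\/\1>/gi );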

    Read the article

  • DataTable throwing exception on RejectChanges

    - by Vale
    I found this issue while working with a DataTable. I added a primary key column to a DataTable, then added one row to the table, removed that row, and added a row with the same key. This works. But when I tried to call RejectChanges() on the table, I got a ConstraintException saying that the value is already present. Here is the example:

        var dataTable = new DataTable();
        var column = new DataColumn("ID", typeof(decimal));
        dataTable.Columns.Add(column);
        dataTable.PrimaryKey = new [] { column };

        decimal id = 1;

        var oldRow = dataTable.NewRow();
        oldRow[column] = id;
        dataTable.Rows.Add(oldRow);
        dataTable.AcceptChanges();

        oldRow.Delete();

        var newRow = dataTable.NewRow();
        newRow[column] = id;
        dataTable.Rows.Add(newRow);

        dataTable.RejectChanges(); // This is where it crashes

    I think that since the row is deleted, the exception should not be thrown (the constraint is not violated because the row is in the deleted state). Is there something I can do about this? Any help is appreciated.

    Read the article

  • Using Facebook Login to create a user?

    - by andbeyond
    I've read this SO post, which led me to this FB policy page, which seemed to include some pertinent information, but I'd like more of a community response - maybe some experienced FB API people who know the limits.

    My question is whether I can use Facebook's Login API to, essentially, create a new user on my website. I really would just like to allow users to easily "transfer" some data from FB in order to more easily create a new account on my site. I realize, first and foremost, that I would obviously announce to the user that by clicking "Submit" in the form, they are creating a separate account on my site. The pertinent blocks on the policy page state:

    "You may cache data you receive through use of the Facebook API in order to improve your application's user experience, but you should try to keep the data up to date. This permission does not give you any rights to such data."

    Which doesn't look good for me, but also this:

    "If you stop using Platform or we disable your application, you must delete all data you have received through use of the Facebook API unless: (a) it is basic account information; or (b) you have received explicit consent from the user to retain their data."

    Which, in my case, I would satisfy through part (b): I would be asking the user's permission to retain the data, as I am simply using Facebook as a convenience to the user when creating an account. I also realize that Facebook has a registration API, but that would require a Facebook-styled login form alongside my own site's login form, and I'd rather have one interface, as that makes it easier for me on the front and back end. Any thoughts?

    Read the article

  • ASP.NET MVC Paging for a search form

    - by James Alexander
    I've read several different posts on paging in MVC, but none describe a scenario where I have something like a search form and then want to display the results of the search criteria (with paging) beneath the form once the user clicks submit.

    My problem is that the paging solution I'm using creates <a href="..."> links that pass the desired page like so: http://mysite.com/search/2/ - and while that's all fine and dandy, I don't have the results of the query that was sent to the db in memory or anything, so I need to query the DB again. If the results are handled by the POST controller action for /Search and the first page of the data is rendered that way, how do I get the same results (based on the form criteria specified by the user) when the user clicks to move to page 2?

    Some JavaScript voodoo? Leverage session state? Make my GET controller action take the same (but optional) parameters expected by the search criteria, and when the GET action is called, instantiate a FormCollection instance, populate it and pass it to the POST action method (thereby satisfying DRY)? Can someone point me in the right direction for this scenario or provide examples that have been implemented in the past? Thanks!
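
    One common shape for this, sketched rather than prescribed (the repository call is invented, and ViewBag/default parameter values assume MVC 3+; ViewData and an explicit null check work the same way on earlier versions): make the search a GET with the criteria and page number in the query string, so each paging link simply re-runs the query and nothing has to be stashed in session.

        // Sketch: criteria come from the query string on every request, so paging links are plain GETs.
        using System.Linq;
        using System.Web.Mvc;

        public class SearchController : Controller
        {
            private const int PageSize = 20;

            [HttpGet]
            public ActionResult Index(string keyword, int page = 1)
            {
                var results = SearchRepository.Find(keyword)   // hypothetical data-access call
                    .Skip((page - 1) * PageSize)
                    .Take(PageSize)
                    .ToList();

                ViewBag.Keyword = keyword;   // the view re-emits these in each paging link, e.g.
                ViewBag.Page = page;         // Url.Action("Index", new { keyword, page = page + 1 })
                return View(results);
            }
        }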

    Read the article

  • [Cocoa] Binding CoreData Managed Object to NSTextFieldCell subclass

    - by ndg
    I have an NSTableView whose first column is set to contain a custom NSTextFieldCell. My custom NSTextFieldCell needs to allow the user to edit a "desc" property of my managed object, but also to display an "info" string that it contains (which is not editable). To achieve this, I followed this tutorial. In a nutshell, the tutorial suggests editing your managed object's generated subclass to create and pass a dictionary of its contents to your NSTableColumn via bindings. This works well for read-only NSCell implementations, but I'm looking to subclass NSTextFieldCell so the user can edit the "desc" property of my managed object. To do this, I followed one of the article's comments, which suggests subclassing NSFormatter to explicitly state which managed object property you would like the NSTextFieldCell to edit. Here's the suggested implementation:

        @implementation TRTableDescFormatter

        - (BOOL)getObjectValue:(id *)anObject
                     forString:(NSString *)string
              errorDescription:(NSString **)error
        {
            if (anObject != nil) {
                *anObject = [NSDictionary dictionaryWithObject:string forKey:@"desc"];
                return YES;
            }
            return NO;
        }

        - (NSString *)stringForObjectValue:(id)anObject
        {
            if (![anObject isKindOfClass:[NSDictionary class]])
                return nil;
            return [anObject valueForKey:@"desc"];
        }

        - (NSAttributedString *)attributedStringForObjectValue:(id)anObject
                                          withDefaultAttributes:(NSDictionary *)attrs
        {
            if (![anObject isKindOfClass:[NSDictionary class]])
                return nil;
            NSAttributedString *anAttributedString = [[NSAttributedString alloc]
                initWithString:[anObject valueForKey:@"desc"]];
            return anAttributedString;
        }

        @end

    I assign the NSFormatter subclass to my cell in my NSTextFieldCell subclass, like so:

        - (void)awakeFromNib
        {
            TRTableDescFormatter *formatter = [[[TRTableDescFormatter alloc] init] autorelease];
            [self setFormatter:formatter];
        }

    This seems to work, but is extremely patchy. On occasion, clicking to edit a row causes its value to nullify. On other occasions, the value entered on one row populates other rows in the table. I've been doing a lot of reading on this subject and would really like to get to the bottom of it. What's more frustrating is that my NSTextFieldCell is rendering exactly how I would like it to - this editing issue is my last obstacle! If anyone can help, that would be greatly appreciated.

    Read the article

  • C# WPF abnormal CPU usage for animation

    - by 0xDEAD BEEF
    I am developing a WPF application and a client reports extremely high CPU usage (90%), which I am unable to reproduce. I have traced the bottleneck down to these lines: a simple glow animation for a small single-LED control (a blinking LED). What could be the reason for this simple animation taking up SO much CPU?

        <Trigger Property="State">
            <Trigger.Value>
                <local:BlinkingLedStatus>Blinking</local:BlinkingLedStatus>
            </Trigger.Value>
            <Trigger.EnterActions>
                <BeginStoryboard Name="beginStoryBoard">
                    <Storyboard>
                        <DoubleAnimation Storyboard.TargetName="glow"
                                         Storyboard.TargetProperty="Opacity"
                                         AutoReverse="True"
                                         From="0.0" To="1.0"
                                         Duration="0:0:0.5"
                                         RepeatBehavior="Forever"/>
                    </Storyboard>
                </BeginStoryboard>
            </Trigger.EnterActions>
            <Trigger.ExitActions>
                <StopStoryboard BeginStoryboardName="beginStoryBoard"/>
            </Trigger.ExitActions>
        </Trigger>
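
    One knob worth knowing about when a forever-running animation is the suspect: WPF animates at up to 60 fps by default, and the Timeline.DesiredFrameRate attached property can cap that per storyboard. A sketch of the same storyboard with a lower frame rate - whether this is the actual cause of the client's 90% CPU is a separate question (software rendering on that machine is another common culprit):

        <Storyboard Timeline.DesiredFrameRate="20">
            <DoubleAnimation Storyboard.TargetName="glow"
                             Storyboard.TargetProperty="Opacity"
                             AutoReverse="True"
                             From="0.0" To="1.0"
                             Duration="0:0:0.5"
                             RepeatBehavior="Forever"/>
        </Storyboard>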

    Read the article

  • Finding the right terminology for a dictionary table

    - by Karl Forner
    My concern is about what I currently call "dictionary tables": database tables containing a list of controlled vocabulary. Let's use an example. Suppose you have a table User containing the fields:

    - user_id: primary key
    - first_name
    - last_name
    - user_type_id: foreign key to the UserType table

    and another table UserType with just two fields:

    - user_type_id: primary key
    - name: the name/value of a particular type of user

    For instance, the UserType table may contain (1, Administrator), (2, PowerUser), (3, Normal)...

    My question is: what is the canonical term for a table like UserType that only contains a list of (distinct) words? I want to publish some code that helps manage this kind of table, but first I have to name them! Thanks for your help.

    Current state of thought: for now I feel "lookup tables" is a good term. It is also used with the same meaning in these posts:

    - http://dbix-class.35028.n2.nabble.com/RFC-Component-for-Lookup-tables-td3504085.html
    - http://tonyandrews.blogspot.de/2004/10/otlt-and-eav-two-big-design-mistakes.html
    - Lookup Tables Best Practices: DB Tables... or Enumerations

    The only problem is that "lookup table" is also sometimes used to name a junction table.

    Read the article

  • rails semi-complex STI with ancestry data model planning the routes and controllers

    - by ere
    I'm trying to figure out the best way to manage my controller(s) and models for a particular use case. I'm building a review system where a User may build a review of several distinct types with a polymorphic Reviewable:

    - Country (has_many reviews & cities)
    - Subdivision/State (optional - sometimes it doesn't exist; also reviewable; has_many cities)
    - City (has places & reviews)
    - Burrow (optional; also reviewable; e.g. Brooklyn)
    - Neighborhood (optional & reviewable; e.g. Williamsburg)
    - Place (belongs to a city)

    I'm also wondering about adding more complexity. I want to include subdivisions occasionally - i.e. for the US I might add Texas, or for Germany, Bavaria - and have them be reviewable as well, but not every country has regions, and even those that do might never be reviewed. So it's not at all strict. I would like it to be as simple and flexible as possible.

    It'd be nice if the user could land on one form and select either a city or a country, and then drill down using data from, say, Foursquare to find a particular place in a city and review it. I'm really not sure which route I should take. For example, what happens if I have a Country and a City, and then decide to add a Burrow? Could I give places tags (i.e. Williamsburg, Brooklyn) that belong_to NY City, with the tags belonging to NY? Tags are more flexible and can optionally describe what areas a place might be in; the tags would belong to a city, but also have places and be reviewable.

    So I'm looking for suggestions from anyone who's done something related. Using Rails 3.2 and Mongoid.

    Read the article

  • error detection/correction/recovery in serial protocols

    - by Jason S
    I have some designing to do for a serial protocol and am running into questions that I figure must have been considered elsewhere. So I'm wondering if there are recommendations for best practices in designing serial protocols. (Please either state a fact that is easily verifiable, or cite a reputable source if you make a claim.) General recommendations for websites/books are also welcome.

    In particular I have to deal with issues like:

    - parsing a stream of bytes into packets
    - verifying that a packet is correct (easy with a CRC, for instance)
    - identifying the reasonable types of errors that can occur (e.g. in a point-to-point serial stream, sporadic single-bit errors and dropped runs of bytes are both likely, but extra phantom bytes are unlikely; whereas for a record stored in flash memory or on a disk drive, the types of errors that predominate are different)
    - error correction or recovery (if I detect an error in a packet, can I correct it? If not, can I resync to the boundary of the next packet?)
    - how to make variable-length packets robust to error correction / recovery

    Any suggestions?
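
    As a concrete illustration of the "resync to the next packet boundary" point - a sketch of one common framing scheme, not a recommendation from the thread: a start-of-frame byte, a length field, the payload, and a CRC-16. The receiver scans for the start byte, checks the CRC, and on failure discards one byte and rescans; error handling is collapsed for brevity.

        #include <stdint.h>
        #include <stddef.h>

        /* CRC-16-CCITT (poly 0x1021, init 0xFFFF) over a byte buffer. */
        static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
        {
            uint16_t crc = 0xFFFF;
            for (size_t i = 0; i < len; i++) {
                crc ^= (uint16_t)data[i] << 8;
                for (int bit = 0; bit < 8; bit++)
                    crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021) : (uint16_t)(crc << 1);
            }
            return crc;
        }

        /* Frame layout (sketch): 0x7E | length | payload[length] | crc_hi | crc_lo
         * Returns the number of payload bytes copied to out, or 0 if no complete,
         * valid frame starts at buf; on 0 the caller waits for more data or drops
         * one byte and retries, which is the resynchronisation step. */
        static size_t try_parse_frame(const uint8_t *buf, size_t avail, uint8_t *out)
        {
            if (avail < 4 || buf[0] != 0x7E)
                return 0;
            size_t len = buf[1];
            if (avail < len + 4)
                return 0;                       /* frame not fully received yet */
            uint16_t expected = ((uint16_t)buf[2 + len] << 8) | buf[3 + len];
            if (crc16_ccitt(&buf[1], len + 1) != expected)
                return 0;                       /* corrupt: resync by sliding forward one byte */
            for (size_t i = 0; i < len; i++)
                out[i] = buf[2 + i];
            return len;
        }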

    Read the article
