Search Results

Search found 10556 results on 423 pages for 'practical approach'.

Page 327 of 423

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have a piece of network software which uses UDP to communicate with other instances of the same program. For various reasons, I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it has worked well, but I now encounter an issue when I have to send a lot of data chunks. I have the following algorithm:
    1. Split the message into small data chunks (around 1500 bytes).
    2. Iterate over the list of data chunks and send each one using sendto().
    However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh. It depends. Anyway, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1) but never over my LAN. If I add something like std::cout << "test" << std::endl; between the sendto() calls, then every frame is received. I am aware that UDP allows packet loss and that my frames might be lost for a lot of reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here?
    - Implementing some acknowledgement mechanism (just like TCP) seems overkill.
    - Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance.
    - Increasing (if possible) the receiver's internal UDP buffer? I don't even know if this is possible.
    - Something else?
    I really need your advice here. Thanks very much.

    Read the article

  • scripting a google docs form submission

    - by justSteve
    I'm trying to create a bookmarklet that parses a page and sends the results to a Google Docs spreadsheet via a form that I've defined. The relevant bit of the script is:
        var form = document.createElement("form");
        form.action = "http://spreadsheets.google.com/formResponse?formkey=Fd0SHgwQ3YwSFd5UHZpM1QxMlNOdlE6MA&ifq";
        form.method = "POST";
        form.id = "ss-form";
        form.innerHTML = [
            "<input id='entry_0' name='entry.0.single' value='" + orderDate + "'/>",
            "<input name='entry.2.single' value='" + email + "'/>",
            "<input name='entry.3.single' value='" + customerID + "'/>",
        ].join("");
        form.submit();
        alert(form.innerHTML); // returns:
    Nothing is being saved to the form via the bookmarklet - is there any way to capture Google's response in my bookmarklet's code? (FWIW, I've injected jQuery via jQueryify.)
    EDIT: Firebug's Net panel isn't hearing any of the activity triggered by the bookmarklet. How about I approach this from Google's viewform method instead of formResponse? The form I'm trying to submit is located at: http://spreadsheets.google.com/viewform?hl=en&formkey=dFd0SHgwQ3YwSFd5UHZpM1QxMlNOdlE6MA How can I go about injecting the script values into that form and then submitting it - again... via script within the bookmarklet that would have been triggered while on the page being parsed?

    Read the article

  • How to get an element using inner text (Watir, Nokogiri, Hpricot)

    - by Hpriguy
    I have been experimenting with Watir, Nokogiri and Hpricot. All of these use a top-down approach, which is my problem - i.e., they use the element type to search for elements. I want to find the element using its text, without knowing the element type. e.g.
        <element1>
          <element2> Text2 </element2>
          <element3> Text3 </element3>
          text4
        </element1>
    What I want is to get element2 and element1 etc. by searching for Text2 and Text3. Please note that I do not know whether the elements are divs or tr/tds or links etc. I just know the text. The algorithm should be something like: iterate through all the elements, match the inner text, and if it matches, give me the element and the parent element. Let me know if this is possible in any way.

    Read the article

  • Replace text in word textbox objects using VSTO and C#

    - by Roberto
    Hi, I have to find/replace text in a Word document. It works fine for plain text spread through the document; however, when the text is in a text box, the standard find/replace approach doesn't reach it. I found a VBA solution, but since I am working in C#, I would like to find a solution in C#. The Word document is in 2007 format and my Visual Studio is 2010. I am using .NET Framework 3.5, but if required, I can consider moving to 4.0. Here is the code for the find/replace that only works with plain text (not in Word text box objects):
        object Missing = System.Reflection.Missing.Value;
        object fileToOpen = (object)@"c:\doc.docx";
        object fileToSave = (object)@"c:\doc.docx";
        Word.Application app = new Word.ApplicationClass();
        Word.Document doc = new Word.Document();
        try
        {
            doc = app.Documents.Open(ref fileToOpen, ref Missing, ref Missing, ref Missing,
                ref Missing, ref Missing, ref Missing, ref Missing, ref Missing, ref Missing,
                ref Missing, ref Missing, ref Missing, ref Missing, ref Missing, ref Missing);
            object replaceAll = Word.WdReplace.wdReplaceAll;
            app.Selection.Find.ClearFormatting();
            app.Selection.Find.Text = "MyTextForReplacement";
            app.Selection.Find.Replacement.ClearFormatting();
            app.Selection.Find.Replacement.Text = "Found you!";
            app.Selection.Find.Execute(
                ref Missing, ref Missing, ref Missing, ref Missing, ref Missing,
                ref Missing, ref Missing, ref Missing, ref Missing, ref Missing,
                ref replaceAll, ref Missing, ref Missing, ref Missing, ref Missing);
    I also tried the code below, but it didn't work either:
        foreach (Word.Shape s in app.ActiveDocument.Shapes)
        {
            if (s.TextFrame.HasText >= 1) // The value is always 0 or -1, and even letting 0 go forward,
                                          // it doesn't work, because there is no text in there...
            {
                foreach (Word.Field f in s.TextFrame.TextRange.Fields)
                {
                    switch (f.Type)
                    {
                        .
                        . // I never reached this point
                        .
                    }
                }
            }
        }
    Any help will be appreciated... Thanks, -- Roberto Lopes
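
    One possible direction (a rough, untested sketch that reuses the Missing and replaceAll objects from the snippet above): text inside Word text boxes lives in shape text frames, which a Selection-based find never visits, so the find/replace can be run directly against each shape's TextRange instead.
        foreach (Word.Shape shape in doc.Shapes)
        {
            // HasText is msoTrue (-1) when the frame actually contains text.
            if (shape.TextFrame.HasText != 0)
            {
                Word.Find find = shape.TextFrame.TextRange.Find;
                find.ClearFormatting();
                find.Replacement.ClearFormatting();
                find.Text = "MyTextForReplacement";
                find.Replacement.Text = "Found you!";
                object replaceAllShapes = Word.WdReplace.wdReplaceAll;
                find.Execute(
                    ref Missing, ref Missing, ref Missing, ref Missing, ref Missing,
                    ref Missing, ref Missing, ref Missing, ref Missing, ref Missing,
                    ref replaceAllShapes, ref Missing, ref Missing, ref Missing, ref Missing);
            }
        }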

    Read the article

  • Project ideas for automated deduction/automated theorem proving?

    - by wsh
    Dear Stack Overflow brethren, I'm a second-semester junior who will embark upon my thesis soon, and I have an interest in automated deduction and automated theorem provers. As in, I'd like to advance the art in some way (I don't mean that pretentiously, but I do want to do something productive). I've Googled pretty far and wide, and so far few promising ideas have emerged. There are a few student project idea pages, but most seem either horribly outdated or too advanced. (I was originally going to attempt to synthesize postmodernist thought (hahaha) and abstract its logical content, build a complete and consistent model (if possible, of course), and attempt to automate it, grounding said model as far as possible in a nonstandard logic a la these. My advisor thought that gave postmodernist thought too much credit (a while ago I reimplemented the Postmodernism Generator in Haskell with Parsec, so that is in part where the idea came from); I am tempted to concur.) So, yeah. Does anyone have ideas? I apologize if there is some obvious gap in my approach here/if I haven't appropriately done my homework (and if there is one, please tell me!), but in large part I don't even know where to start, and thank you for reading all that.

    Read the article

  • Are SharePoint site templates really less performant than site definitions?

    - by Jim
    So, it seems that in the SharePoint blogosphere everybody just copies and pastes the same bullet points from other blogs. One bullet point I've seen is that SharePoint site templates are less performant than site definitions because site definitions are stored on the file system. Is that true? It seems odd that site templates would be less performant. It's my understanding that all site content lives in a database, whether you use a site template or a site definition. A site template is applied once to the database, and from then on the site should not care whether the content was created using a site template or not. So, does anybody have an architectural reason why a site template would be less performant than a site definition?
    Edit: Links to the blogs that say there is a performance difference:
    - From MSDN: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance."
    - From DevX: "However, user templates in SharePoint can lead to performance problems and may not be the best approach if you're trying to create a set of reusable templates for an entire organization."
    - From IT Footprint: "Because it is slow to store templates in and retrieve them from the database, site templates can result in slower performance. Templates in the database are compiled and executed every time a page is rendered."
    - From Branding SharePoint: "Custom site definitions hold the following advantages over custom templates: Data is stored directly on the Web servers, so performance is typically better."
    At a minimum, I think the above articles are incomplete, and I think several are misleading based on what I know of SharePoint's architecture. I read another blog post that argued against the performance differences, but I can't find the link.

    Read the article

  • What is the fastest way to pull a few element values out of XML files in Perl?

    - by Anon Guy
    I have a bunch of XML files that are about 1-2 megabytes in size. Actually, more than a bunch, there are millions. They're all well-formed and many are even validated against their schema (confirmed with libxml2). All were created by the same app, so they're in a consistent format (though this could theoretically change in the future). I want to check the values of one element in each file from within a Perl script. Speed is important (I'd like to take less than a second per file) and as noted I already know the files are well-formed. I am sorely tempted to simply 'open' the files in Perl and scan through until I see the element I am looking for, grab the value (which is near the start of the file), and close the file. On the other hand, I could use an XML parser (which might protect me from future changes to the XML formatting) but I suspect it will be slower than I'd like. Can anyone recommend an appropriate approach and/or parser? Thanks in advance.

    Read the article

  • Using jQuery to grab the content from CKEditor's iframe

    - by miCRoSCoPiC_eaRthLinG
    Hey guys, I have this custom-written CMS that uses CKEditor (FCKEditor v3) for editing content. Along with that, I'm using the jQuery Validation plugin to check all fields for errors prior to ajax-based submission. For passing the data on to the PHP backend, I'm using the serialize() function. The problem is that serialize manages to grab all other fields correctly, except for the actual content typed in CKEditor. Like every other WYSIWYG editor, this one too overlays an iframe over an existing textbox. And serialize ignores the iframe and looks only into the textbox for content... which, of course, it doesn't find - thus returning a blank content body. My approach to this is to create a hook into CKEditor's onchange event and concurrently update the textbox (CKEDITOR.instances.[textboxname].getData() returns the content) or some other hidden field with any changes made in the editor. However, since CKEditor is still in its beta stage and severely lacks documentation, I cannot seem to find a suitable API call that'll enable me to do so. Does anyone have any idea how to go about this? Or maybe suggest an even better method?

    Read the article

  • VB.Net Custom Object Master-Detail Data Binding

    - by clawson
    Since beginning to use VB.Net some years ago, I have become slowly familiar with using the data binding features of .Net; however, I often find myself bewildered by its behavior, and instead of discovering the correct way it should work, I find some dirty workaround to suit my needs and continue on. Needless to say, my problems continue to arise. I am using custom objects as the data sources for my controls and often for entire forms. I find it frustrating to separate business logic from the graphical interface. (That could be a new question entirely.) So for a lot of objects I generate a form which has the DataBindingSource for the object. When I create each form using the New constructor, I explicitly pass to it the object to which it should be bound, and then set this passed object as the DataSource of the BindingSource. (That's a mouthful!) Now the master object (say, bound to each form) often contains a List of objects which I like to have displayed in a DataGridView. I (sometimes) create and modify these child objects in their own form (again creating a data binding the same way as the master form), but when I add them to the List in the master object, the DataGridView won't update with the new items. So my question really has a few layers:
    1. How can I easily/efficiently/correctly update this DataGridView with the list of detail objects when I add them to the list of the master object?
    2. Is this approach to data binding good/viable?
    3. What's the best way to separate business logic from the graphical interface?
    Thanks for the help!
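
    For what it's worth, a minimal C# sketch of one common fix for the first point (the class and property names are made up; the same BindingList(Of T) type exists in VB.NET): exposing the child collection as a BindingList instead of a List lets the grid see added items, because BindingList raises ListChanged notifications that a plain List never does.
        using System.ComponentModel;
        using System.Windows.Forms;

        public class Order                       // the "master" object
        {
            // BindingList raises ListChanged when items are added or removed.
            public BindingList<OrderLine> Lines { get; private set; }

            public Order()
            {
                Lines = new BindingList<OrderLine>();
            }
        }

        public class OrderLine                   // the "detail" object
        {
            public string Description { get; set; }
        }

        // In the master form, bind the grid once; later Add() calls show up automatically:
        //   detailGrid.DataSource = order.Lines;
        //   order.Lines.Add(new OrderLine { Description = "new detail row" });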

    Read the article

  • ASP.NET MVC OutputCache with POST Controller Actions

    - by Maxim Z.
    I'm fairly new to using the OutputCache attribute in ASP.NET MVC.
    Static Pages
    I've enabled it on static pages on my site with code such as the following:
        [OutputCache(Duration = 7200, VaryByParam = "None")]
        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                //...
    If I understand correctly, I made the whole controller cache for 7200 seconds (2 hours).
    Dynamic Pages
    However, how does it work with dynamic pages? By dynamic, I mean where the user has to submit a form. As an example, I have a page with an email form. Here's what that code looks like:
        public class ContactController : Controller
        {
            //
            // GET: /Contact/
            public ActionResult Index()
            {
                return RedirectToAction("SubmitEmail");
            }

            public ActionResult SubmitEmail()
            {
                // In view for CAPTCHA: <%= Html.GenerateCaptcha() %>
                return View();
            }

            [CaptchaValidator]
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult SubmitEmail(FormCollection formValues, bool captchaValid)
            {
                // Validate form fields, send email if everything's good...
                if (isError)
                {
                    return View();
                }
                else
                {
                    return RedirectToAction("Index", "Home");
                }
            }

            public void SendEmail(string title, string name, string email, string message)
            {
                // Send an email...
            }
        }
    What would happen if I applied OutputCache to the whole controller here? Would the HTTP POST form submission work? Also, my form has a CAPTCHA; would that change anything in the equation? In other words, what's the best way to approach caching with dynamic pages? Thanks in advance.
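
    For reference, the attribute can also be applied per action rather than per controller. A rough sketch of what that might look like here (the duration value is arbitrary, the CaptchaValidator attribute is assumed to be the one from the question, and whether the CAPTCHA view should be cached at all is its own question, since a cached page would keep serving the same challenge):
        public class ContactController : Controller
        {
            // Cache only the GET that renders the form.
            [OutputCache(Duration = 600, VaryByParam = "None")]
            public ActionResult SubmitEmail()
            {
                return View();
            }

            // The POST handler is left uncached so every submission is validated and processed.
            [CaptchaValidator]
            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult SubmitEmail(FormCollection formValues, bool captchaValid)
            {
                // validate, send the email, then redirect as before
                return RedirectToAction("Index", "Home");
            }
        }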

    Read the article

  • Duration of Excessive GC Time in "java.lang.OutOfMemoryError: GC overhead limit exceeded"

    - by jilles de wit
    Occasionally, somewhere between once every 2 days and once every 2 weeks, my application crashes in a seemingly random location in the code with: java.lang.OutOfMemoryError: GC overhead limit exceeded. If I google this error I come to this SO question, and that led me to this piece of Sun documentation which explains:
        The parallel collector will throw an OutOfMemoryError if too much time is being spent in garbage collection: if more than 98% of the total time is spent in garbage collection and less than 2% of the heap is recovered, an OutOfMemoryError will be thrown. This feature is designed to prevent applications from running for an extended period of time while making little or no progress because the heap is too small. If necessary, this feature can be disabled by adding the option -XX:-UseGCOverheadLimit to the command line.
    Which tells me that my application is apparently spending 98% of the total time in garbage collection to recover only 2% of the heap. But 98% of what time? 98% of the entire two weeks the application has been running? 98% of the last millisecond? I'm trying to determine the best approach to actually solving this issue rather than just using -XX:-UseGCOverheadLimit, but I feel a need to better understand the issue I'm solving.

    Read the article

  • Date since 1600 to NSDate?

    - by Steven Fisher
    I have a date that's stored as a number of days since January 1, 1600 that I need to deal with. This is a legacy date format that I need to read many, many times in my application. Previously, I'd been creating a calendar, empty date components and root date like this:
        self.gregorian = [[[NSCalendar alloc] initWithCalendarIdentifier: NSGregorianCalendar] autorelease];
        id rootComponents = [[[NSDateComponents alloc] init] autorelease];
        [rootComponents setYear: 1600];
        [rootComponents setMonth: 1];
        [rootComponents setDay: 1];
        self.rootDate = [gregorian dateFromComponents: rootComponents];
        self.offset = [[[NSDateComponents alloc] init] autorelease];
    Then, to convert the integer later to a date, I use this:
        [offset setDay: theLegacyDate];
        id eventDate = [gregorian dateByAddingComponents: offset toDate: rootDate options: 0];
    (I never change any values in offset anywhere else.) The problem is I'm getting a different time for rootDate on iOS vs. Mac OS X. On Mac OS X, I'm getting midnight. On iOS, I'm getting 8:12:28. (So far, it seems to be consistent about this.) When I add my number of days later, the weird time stays.
        OS       | legacyDate | rootDate                  | eventDate
        ======== | ========== | ========================= | =========================
        Mac OS X | 143671     | 1600-01-01 00:00:00 -0800 | 1993-05-11 00:00:00 -0700
        iOS      | 143671     | 1600-01-01 08:12:28 +0000 | 1993-05-11 07:12:28 +0000
    In the previous release of my product, I didn't care about the time; now I do. Why the weird time on iOS, and what should I do about it? (I'm assuming the hour difference is DST.) I've tried setting the hour, minute and second of rootComponents to 0. This has no impact. If I set them to something other than 0, it adds them to 8:12:28. I've been wondering if this has something to do with leap seconds or other cumulative clock changes. Or is this entirely the wrong approach to use on iOS?

    Read the article

  • OpenCV: Shift/Align face image relative to reference Image (Image Registration)

    - by Abhischek
    I am new to OpenCV2 and working on a project in emotion recognition, and I would like to align a facial image in relation to a reference facial image. I would like to get the image translation working before moving on to rotation. The current idea is to run a search within a limited range on both x and y coordinates and use the sum of squared differences as the error metric to select the optimal x/y parameters to align the image. I'm using the OpenCV face_cascade function to detect the face images; all images are resized to a fixed size (128x128). Question: Which parameters of the Mat image do I need to modify to shift the image in a positive/negative direction on both the x and y axis? I believe setImageROI is no longer supported by Mat datatypes? I have the ROIs for both faces available, however I am unsure how to use them.
        void alignImage(vector<Rect> faceROIstore, vector<Mat> faceIMGstore)
        {
            Mat refimg = faceIMGstore[1];   // reference image
            Mat dispimg = faceIMGstore[52]; // "displaced" version of reference image
            //Rect refROI = faceROIstore[1];   // bounding box for face in reference image
            //Rect dispROI = faceROIstore[52]; // bounding box for face in displaced image
            Mat aligned;
            matchTemplate(dispimg, refimg, aligned, CV_TM_SQDIFF_NORMED);
            imshow("Aligned image", aligned);
        }
    The idea for this approach is based on the Image Alignment tutorial by Richard Szeliski. I am working on Windows with OpenCV 2.4. Any suggestions are much appreciated.

    Read the article

  • Altering URLs and mapping - path_prefix? - Ruby on Rails...

    - by bgadoci
    Ok, so I am working on a blog application of sorts. Thus far, it allows a user to sign up for their own account and create posts, tags, comments, etc. I have just implemented the ability to use www.myapp.com/brandon to set @user (found by username) and therefore correctly display the user's information at each URL. So when you go to www.myapp.com/brandon you see all of Brandon's posts, tags, and comments associated with those posts, etc. Works great. I implemented this URL mapping through the routes.rb file by adding the following:
        map.username_link '/:username', :controller => 'posts', :action => 'index'
    And then just setting the @user variable in the PostController and corresponding views to find_by_username. Now the issue is this. Once at www.myapp.com/brandon, when you click on a post title, it sends you to myapp.com/posts/id without the username in the URL, which blows up my view because nothing is being set for the @user variable. How do I tell Rails to create the link reading www.myapp.com/brandon/posts/id vs. www.myapp.com/posts/id, and then map that action? I am assuming this will involve some code in the view, and then adding another line in the routes.rb file, like
        map.subdomain_link '/:username/posts/:id', :controller => 'posts', :action => 'show'
    and adding the @user variable to the PostController#show action, but I'm not totally sure if this is even the right approach.
    UPDATE: I have been reading about :path_prefix and it seems like it might be what I am looking for. I have tried to implement it simply by adding
        map.resources :posts, :path_prefix => '/:user_username/:post_id'
    to the routes.rb file, but it's not working (even after a server restart). I am sure this is not correct, but I wanted to let you know what I have tried.

    Read the article

  • Best Practice for Shell Layout and Switching Views - Prism, SL4, On Demand Load Modules

    - by kmacmahon
    I am learning Prism, and I have a question about the best approach for the main Shell. Assume the Shell has 2 regions: Toolbar and Main. The toolbar has 3 main buttons that each represent a different on-demand-load module. Each of these modules currently registers itself as fitting in the Main region. When I click one of the buttons, I want to do the following:
    1. Notify any active view that it is being switched away from, with an option to cancel if there is a pending action still required. This might cascade to child views.
    2. If the action isn't cancelled, load the on-demand module if it has not yet been loaded, else activate it within the region.
    Should these three modules all fit in the same region, or should my shell have 3 regions defined within content presenters? One of the areas I got stuck on was that when you register a view from the module's Initialize, it doesn't get added with a strongly typed name, so when I tried to determine whether my view was already added to the region with GetView(viewname) it always returned null, and I ended up adding another view to the region.
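
    A rough sketch of the load-then-activate part being described (Prism v2-style API; usings are omitted since namespaces differ between Prism versions, the module, view and region names are invented, and the IRegionManager/IModuleManager instances are assumed to be injected):
        public class ShellToolbarHandler
        {
            private readonly IRegionManager regionManager;
            private readonly IModuleManager moduleManager;

            public ShellToolbarHandler(IRegionManager regionManager, IModuleManager moduleManager)
            {
                this.regionManager = regionManager;
                this.moduleManager = moduleManager;
            }

            // Called when the (hypothetical) "Orders" toolbar button is clicked.
            public void ShowOrders()
            {
                // Loads the on-demand module; does nothing if it is already loaded.
                moduleManager.LoadModule("OrdersModule");

                IRegion main = regionManager.Regions["MainRegion"];

                // GetView(name) only finds the view if the module added it with an explicit name,
                // e.g. main.Add(ordersView, "OrdersView") in its Initialize().
                object view = main.GetView("OrdersView");
                if (view != null)
                {
                    main.Activate(view);
                }
            }
        }
    The cancel-on-pending-changes step would sit in front of LoadModule/Activate, typically by asking the currently active view (or its presenter) whether it is safe to navigate away.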

    Read the article

  • How do I manage application configuration in ASP.NET?

    - by GlennS
    I am having difficulty managing the configuration of an ASP.NET application to deploy for different clients. The sheer volume of different settings which need twiddling takes up large amounts of time, and the current configuration methods are too complicated to enable us to push this responsibility out to support partners. Any suggestions for better methods to handle this, or good sources of information to research?
    How we do things at present:
    - Various XML configuration files which are referenced in Web.Config, for example an AppSettings.xml. Configurations for specific sites are kept in duplicate configuration files.
    - Text files containing lists of data specific to the site.
    - In some cases, manual one-off changes to the database.
    - C# configuration for Windsor IOC.
    The specific issues we are having:
    - Different sites with different features enabled, different external services we have to talk to and different business rules.
    - Different deployment types (live, test, training).
    - Configuration keys change across versions (get added, removed), meaning we have to update all the duplicate files.
    - We still need to be able to alter keys while the application is running.
    Our current thoughts on how we might approach this are:
    - Move the configuration into dynamically compiled code (possibly Boo, Binsor or JavaScript).
    - Have some form of diffing/merging configuration: combine a default config with a live/test/training config and a site-specific config.

    Read the article

  • Inheritance mapping with Fluent NHibernate

    - by Berryl
    Below is an example of how I currently use automapping overrides to set up my db representation of inheritance. It gets the job done functionality-wise, BUT it ends up using some internal default values. For example, the discriminator column name winds up being the literal value 'discriminator' instead of "ActivityType", and the discriminator values are the fully qualified type names of each class, instead of "ACCOUNT" and "PROJECT". I am guessing that this is a bug that doesn't get much attention now that conventions are preferred, and that the convention approach works correctly. I am looking for a sample of usage. Cheers, Berryl
        public class ActivityBaseMap : IAutoMappingOverride<ActivityBase>
        {
            public void Override(AutoMapping<ActivityBase> mapping)
            {
                ...
                mapping.DiscriminateSubClassesOnColumn("ActivityType");
            }
        }

        public class AccountingActivityMap : SubclassMap<AccountingActivity>
        {
            public AccountingActivityMap()
            {
                ...
                DiscriminatorValue("ACCOUNT");
            }
        }

        public class ProjectActivityMap : SubclassMap<ProjectActivity>
        {
            public ProjectActivityMap()
            {
                ...
                DiscriminatorValue("PROJECT");
            }
        }

    Read the article

  • Cleanest RESTful design for purely "action" calls?

    - by Josh Handel
    Hello all, I am sticking my toe in the RESTful waters and I just can't find a "satisfactory" solution to how to handle truly "action"-oriented calls on a RESTful service. My quandary can be broken down into two parts.
    1) Transactional calls: I understand the idea of having an ActionTransactor that you get a resource to with a POST, update the parameters of, and then commit with a PUT (as described all over the place and in the O'Reilly RESTful Web Services book). But I struggle with the idea of keeping URLs with states present forever. If we really, honestly don't need to keep a transaction forever, can we kill the resource URI? Do URIs need to be permanent, or can they be transient URIs that expire?
    2) Non-transactional calls: these might be calls to perform some workflow that spans multiple resources, where having a resource just doesn't make sense. An example might be regenerating some calculated and cached value like a large aggregate, or re-indexing a blog, or some such "purely action" call.
    Anyway, I'm curious about the community's thoughts on this... Thus far, I've read that overloading POST is the cleanest way to handle part 2, but there is an equal amount of argument against that approach as well. And (to me) it's not self-documenting, which I thought was one of the key design goals of RESTful APIs.

    Read the article

  • .NET EventHandlers - Generic or no?

    - by Chris Marasti-Georg
    Every time I get deep into a C# project, I end up with lots of events that really just need to pass a single item. I stick with the EventHandler/EventArgs practice, but what I like to do is have something like:
        public delegate void EventHandler<T>(object src, EventArgs<T> args);

        public class EventArgs<T> : EventArgs
        {
            private T item;

            public EventArgs(T item)
            {
                this.item = item;
            }

            public T Item
            {
                get { return item; }
            }
        }
    Later, I can have my
        public event EventHandler<Foo> FooChanged;
        public event EventHandler<Bar> BarChanged;
    However, it seems that the standard for .NET is to create a new delegate and EventArgs subclass for each type of event. Is there something wrong with my generic approach?
    EDIT: The reason for this post is that I just re-created this in a new project, and wanted to make sure it was OK. Actually, I was re-creating it as I posted. I found that there is a generic EventHandler<TEventArgs>, so you don't need to create the generic delegate, but you still need the generic EventArgs<T> class, because TEventArgs : EventArgs.
    Another EDIT: One downside (to me) of the built-in solution is the extra verbosity:
        public event EventHandler<EventArgs<Foo>> FooChanged;
    vs.
        public event EventHandler<Foo> FooChanged;
    It can be a pain for clients to register for your events though, because the System namespace is imported by default, so they have to manually seek out your namespace, even with a fancy tool like ReSharper... Anyone have any ideas pertaining to that?
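
    For comparison, a minimal sketch of the conventional built-in pattern (the Foo/Widget names are placeholders): a dedicated EventArgs subclass per payload type, used with the framework's EventHandler<TEventArgs> delegate.
        using System;

        public class Foo { }

        public class FooChangedEventArgs : EventArgs
        {
            public FooChangedEventArgs(Foo item) { Item = item; }
            public Foo Item { get; private set; }
        }

        public class Widget
        {
            // No custom delegate type needed; System.EventHandler<T> carries the args.
            public event EventHandler<FooChangedEventArgs> FooChanged;

            protected virtual void OnFooChanged(Foo item)
            {
                EventHandler<FooChangedEventArgs> handler = FooChanged;
                if (handler != null)
                {
                    handler(this, new FooChangedEventArgs(item));
                }
            }
        }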

    Read the article

  • C# Switch-case Loop for Datagridview cells

    - by Nail Yener
    Hi, I am working on a form with DataGridView and WebBrowser controls. I have three columns in the DataGridView: URL, username and password. What I want to do is automate the login for some websites that I use frequently. For that reason I created the code below, though I am not sure if this is the right approach. The problem is with the argument of the switch. I will click the row on the DataGridView and then click login_button, so that the username and password info will be passed to the related fields on the webpage. The reason I need a switch-case loop is that all the webpages have different element IDs for the username and password fields. As I said, I am not sure if DataGridView allows switch-case; I searched the net but couldn't find any samples.
        private void login_button_Click(object sender, EventArgs e)
        {
            switch (dataGridView1.Rows[e.RowIndex].Cells[e.ColumnIndex].Value.ToString())
            {
                case "http://www.website1.com":
                    webBrowser1.Document.GetElementById("username").InnerText = dataGridView1.Rows[3].Cells[3].Value.ToString();
                    webBrowser1.Document.GetElementById("password").InnerText = dataGridView1.Rows[3].Cells[4].Value.ToString();
                    return;
                case "http://www.website2.com":
                    webBrowser1.Document.GetElementById("uname").InnerText = dataGridView1.Rows[4].Cells[3].Value.ToString();
                    webBrowser1.Document.GetElementById("pswd").InnerText = dataGridView1.Rows[4].Cells[4].Value.ToString();
                    return;
            }

            HtmlElementCollection elements = this.webBrowser1.Document.GetElementsByTagName("Form");
            foreach (HtmlElement currentElement in elements)
            {
                currentElement.InvokeMember("Login");
            }
        }
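
    As a side note, e in a Button.Click handler is a plain EventArgs, so e.RowIndex/e.ColumnIndex won't compile there. One hedged sketch of an alternative (the column indexes and element IDs are assumptions based on the description above) is to read the currently selected grid row instead:
        private void login_button_Click(object sender, EventArgs e)
        {
            DataGridViewRow row = dataGridView1.CurrentRow;  // the row clicked before pressing Login
            if (row == null)
                return;

            string url      = row.Cells[0].Value.ToString(); // assuming column 0 = URL
            string username = row.Cells[1].Value.ToString(); // assuming column 1 = username
            string password = row.Cells[2].Value.ToString(); // assuming column 2 = password

            switch (url)
            {
                case "http://www.website1.com":
                    // SetAttribute("value", ...) fills an <input> field via the WebBrowser DOM.
                    webBrowser1.Document.GetElementById("username").SetAttribute("value", username);
                    webBrowser1.Document.GetElementById("password").SetAttribute("value", password);
                    break;
                case "http://www.website2.com":
                    webBrowser1.Document.GetElementById("uname").SetAttribute("value", username);
                    webBrowser1.Document.GetElementById("pswd").SetAttribute("value", password);
                    break;
            }
        }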

    Read the article

  • Help matching fields between two classes

    - by Michael
    I'm not too experienced with Java yet, and I'm hoping someone can steer me in the right direction, because right now I feel like I'm just beating my head against a wall... The first class is called MeasuredParams, and it's got 40+ numeric fields (height, weight, waistSize, wristSize - some int, but mostly double). The second class is a statistical classifier called Classifier. It's been trained on a subset of the MeasuredParams fields. The names of the fields that the Classifier has been trained on are stored, in order, in an array called reqdFields. What I need to do is load a new array, toClassify, with the values stored in the fields from MeasuredParams that match the field list (including order) found in reqdFields. I can make any changes necessary to the MeasuredParams class, but I'm stuck with Classifier as it is. My brute-force approach was to get rid of the fields in MeasuredParams and use an ArrayList instead, and store the field names in an enum object to act as an index pointer. Then I would loop through the reqdFields list, one element at a time, find the matching name in the enum object to find the correct position in the ArrayList, load the value stored at that position into toClassify, and then continue on to the next element in reqdFields. I'm not sure how exactly I would search through the enum object - it would be a lot easier if the field names were stored in a second ArrayList. But then the index positions between the two would have to stay matched, and I'm back to using an enum. I think. I've been running around in circles all afternoon, and I keep thinking there must be an easier way of doing it. I'm just stuck right now and can't see past what I've started. Any help would be GREATLY appreciated. Thanks so much! Michael

    Read the article

  • Objective-C Protocols within Protocols

    - by LucasTizma
    I recently began trying my hand at using protocols in my Objective-C development as an (obvious) means of delegating tasks more appropriately among my classes. I completely understand the basic notion of protocols and how they work. However, I came across a roadblock when trying to create a custom protocol that in turn implements another protocol. I have since discovered the solution, but I am curious why the following DOES NOT work:
        @protocol STPickerViewDelegate < UIPickerViewDelegate >

        - ( void )customCallback;

        @end

        @interface STPickerView : UIPickerView
        {
            id < STPickerViewDelegate > delegate;
        }

        @property ( nonatomic, assign ) id < STPickerViewDelegate > delegate;

        @end
    Then in a view controller, which conforms to STPickerViewDelegate:
        STPickerView * pickerView = [ [ STPickerView alloc ] init ];
        pickerView.delegate = self;

        - ( void )customCallback
        {
            ...
        }

        - ( NSString * )pickerView:( UIPickerView * )pickerView titleForRow:( NSInteger )row forComponent:( NSInteger )component
        {
            ...
        }
    The problem was that pickerView:titleForRow:forComponent: was never being called. On the other hand, customCallback was being called just fine, which isn't too surprising. I don't understand why STPickerViewDelegate, which itself conforms to UIPickerViewDelegate, does not notify my view controller when events from UIPickerViewDelegate are supposed to occur. Per my understanding of Apple's documentation, if a protocol (A) itself conforms to another protocol (B), then a class (C) that conforms to the first protocol (A) must also conform to the second protocol (B), which is exactly the behavior I want and expected. What I ended up doing was removing the id< STPickerViewDelegate > delegate property from STViewPicker and instead doing something like the following in my STViewPicker implementation where I want to invoke customCallback:
        if ( [ self.delegate respondsToSelector:@selector( customCallback ) ] )
        {
            [ self.delegate performSelector:@selector( customCallback ) ];
        }
    This works just fine, but I really am puzzled as to why my original approach did not work.

    Read the article

  • Specifying ASP.NET MVC attributes for auto-generated data models

    - by Lyubomyr Shaydariv
    Hello, everyone. I'm very new to ASP.NET MVC (as well as ASP.NET in general) and am trying to gain some knowledge of this technology, so I'm sorry if I ask some trivial questions. I have installed ASP.NET MVC 3 RC1 and I'm trying to do the following. Let's say I have a model that's completely auto-generated from a table using the "LINQ to SQL Classes" template in VS2010. The template generates 3 files (two .cs files and one .layout file), and the generated partial class is expected to be used as an MVC model. Let's also say a single DB column, mapped into the model, may look like this:
        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")]
        public string Name
        {
            get
            {
                return this._Name;
            }
            set
            {
                if ( (this._Name != value) )
                {
                    // ... generated stuff goes here
                }
            }
        }
    The ASP.NET MVC engine also provides a beautiful declarative way to specify some additional stuff, like RequiredAttribute, DisplayNameAttribute and other nice attributes. But since the mapped model is purely auto-generated, I've realized that I should not change the model manually and specify the fields like:
        [Required]
        [DisplayName("Project name")]
        [StringLength(128)]
        [global::System.Data.Linq.Mapping.ColumnAttribute(Storage = "_Name", DbType = "VarChar(128)")]
        public string Name { ...
    though this approach works perfectly... until I change the model in the DBML designer, which removes the ASP.NET MVC attributes automatically. So, how do I specify ASP.NET MVC attributes for the DBML models and their fields safely? Thanks in advance, and Merry Christmas.
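
    A commonly suggested direction for this situation (a sketch only; the Project class name here is just a guess at the entity's name) is to leave the generated file untouched and hang the annotations on a partial "buddy" class via MetadataTypeAttribute from System.ComponentModel.DataAnnotations:
        using System.ComponentModel;
        using System.ComponentModel.DataAnnotations;

        // Must have the same name and namespace as the LINQ to SQL generated class.
        [MetadataType(typeof(ProjectMetadata))]
        public partial class Project
        {
        }

        // Never used directly; MVC only reads the attributes from its members.
        public class ProjectMetadata
        {
            [Required]
            [DisplayName("Project name")]
            [StringLength(128)]
            public string Name { get; set; }
        }
    Because the attributes live in a separate file, regenerating the DBML no longer wipes them out.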

    Read the article

  • Detecting browser capabilities and selective events for mouse and touch

    - by skidding
    I started using touch events a while ago, but I just stumbled upon quite a problem. Until now, I checked whether touch capabilities are supported and applied selective events based on that, like this:
        if (document.ontouchmove === undefined) {
            // apply mouse events
        } else {
            // apply touch events
        }
    However, my scripts stopped working in Chrome 5 (which is currently beta) on my computer. I researched it a bit, and as I expected, in Chrome 5 (as opposed to older Chrome, Firefox, IE, etc.) document.ontouchmove is no longer undefined but null. At first I wanted to submit a bug report, but then I realized: there are devices that have both mouse and touch capabilities, so that might be natural; maybe Chrome now defines it because my OS might support both types of events. So the solution seems easy: apply BOTH event types. Right? Well, the problem now takes place on mobile. In order to be backward compatible and support scripts that only use mouse events, mobile browsers might try to fire them as well (on touch). So then, with both mouse and touch events set, a certain handler might be called twice every time. What is the way to approach this? Is there a better way to check and apply selective events, or must I ignore the problems that might occur if browsers fire both touch and mouse events at times?

    Read the article

  • Pros and cons of making database IDs consistent and "readable"

    - by gmale
    Question
    Is it a good rule of thumb for database IDs to be "meaningless"? Conversely, are there significant benefits from having IDs structured in a way where they can be recognized at a glance? What are the pros and cons?
    Background
    I just had a debate with my coworkers about the consistency of the IDs in our database. We have a data-driven application that leverages Spring so that we rarely ever have to change code. That means, if there's a problem, a data change is usually the solution. My argument was that by making IDs consistent and readable, we save ourselves significant time and headaches, long term. Once the IDs are set, they don't have to change often and, if done right, future changes won't be difficult. My coworkers' position was that IDs should never matter. Encoding information into the ID violates DB design policies, and keeping them orderly requires extra work that "we don't have time for." I can't find anything online to support either position. So I'm turning to all the gurus here at SA!
    Example
    Imagine this simplified list of database records representing food in a grocery store; the first set represents data that has meaning encoded in the IDs, while the second does not.
    IDs with meaning:
        Type
        1  Fruit
        2  Veggie

        Product
        101  Apple
        102  Banana
        103  Orange
        201  Lettuce
        202  Onion
        203  Carrot

        Location
        41  Aisle four top shelf
        42  Aisle four bottom shelf
        51  Aisle five top shelf
        52  Aisle five bottom shelf

        ProductLocation
        10141  Apple on aisle four top shelf
        10241  Banana on aisle four top shelf
        // just by reading the IDs, it's easy to recognize that these are both Fruit on Aisle 4
    IDs without meaning:
        Type
        1  Fruit
        2  Veggie

        Product
        1  Apple
        2  Banana
        3  Orange
        4  Lettuce
        5  Onion
        6  Carrot

        Location
        1  Aisle four top shelf
        2  Aisle four bottom shelf
        3  Aisle five top shelf
        4  Aisle five bottom shelf

        ProductLocation
        1  Apple on aisle four top shelf
        2  Banana on aisle four top shelf
        // given the IDs, it's harder to see that these are both fruit on aisle 4
    Summary
    What are the pros and cons of keeping IDs readable and consistent? Which approach do you generally prefer, and why? Is there an accepted industry best practice?

    Read the article
