Search Results

Search found 44734 results on 1790 pages for 'model based design'.


  • How to do inclusive range queries when only half-open range is supported (ala SortedMap.subMap)

    - by polygenelubricants
    On SortedMap.subMap

    This is the API for SortedMap<K,V>.subMap:

        SortedMap<K,V> subMap(K fromKey, K toKey): Returns a view of the portion of this map whose keys range from fromKey, inclusive, to toKey, exclusive.

    This inclusive lower bound, exclusive upper bound combo ("half-open range") is something that is prevalent in Java, and while it does have its benefits, it also has its quirks, as we shall soon see. The following snippet illustrates a simple usage of subMap:

        static <K,V> SortedMap<K,V> someSortOfSortedMap() {
            return Collections.synchronizedSortedMap(new TreeMap<K,V>());
        }
        //...
        SortedMap<Integer,String> map = someSortOfSortedMap();
        map.put(1, "One");
        map.put(3, "Three");
        map.put(5, "Five");
        map.put(7, "Seven");
        map.put(9, "Nine");
        System.out.println(map.subMap(0, 4)); // prints "{1=One, 3=Three}"
        System.out.println(map.subMap(3, 7)); // prints "{3=Three, 5=Five}"

    The last line is important: 7=Seven is excluded, due to the exclusive-upper-bound nature of subMap. Now suppose that we actually need an inclusive upper bound; then we could try to write a utility method like this:

        static <V> SortedMap<Integer,V> subMapInclusive(SortedMap<Integer,V> map, int from, int to) {
            return (to == Integer.MAX_VALUE) ? map.tailMap(from) : map.subMap(from, to + 1);
        }

    Then, continuing on with the above snippet, we get:

        System.out.println(subMapInclusive(map, 3, 7));
        // prints "{3=Three, 5=Five, 7=Seven}"
        map.put(Integer.MAX_VALUE, "Infinity");
        System.out.println(subMapInclusive(map, 5, Integer.MAX_VALUE));
        // {5=Five, 7=Seven, 9=Nine, 2147483647=Infinity}

    A couple of key observations need to be made:

        The good news is that we don't care about the type of the values, but...
        subMapInclusive assumes Integer keys for to + 1 to work. A generic version that also takes e.g. Long keys is not possible (see related questions), not to mention that for Long we would need to compare against Long.MAX_VALUE instead.
        Overloads for the numeric primitive boxed types Byte, Character, etc., as keys, would all have to be written individually.
        A special check needs to be made for toInclusive == Integer.MAX_VALUE, because +1 would overflow and subMap would throw IllegalArgumentException: fromKey > toKey.
        This, generally speaking, is an overly ugly and overly specific solution.
        What about String keys? Or some unknown type that may not even be Comparable<?>?

    So the question is: is it possible to write a general subMapInclusive method that takes a SortedMap<K,V>, a K fromKey, and a K toKey, and performs an inclusive-range subMap query?

    Related questions: Are upper bounds of indexed ranges always assumed to be exclusive? Is it possible to write a generic +1 method for numeric box types in Java?

    On NavigableMap

    It should be mentioned that there is a NavigableMap.subMap overload that takes two additional boolean arguments to signify whether each bound is inclusive or exclusive. Had this been made available in SortedMap, then none of the above would even have been asked. So working with a NavigableMap<K,V> for inclusive range queries would have been ideal, but while Collections provides utility methods for SortedMap (among other things), we aren't afforded the same luxury with NavigableMap.

    Related question: Writing a synchronized thread-safety wrapper for NavigableMap

    On APIs providing only exclusive-upper-bound range queries

    Does this highlight a problem with exclusive-upper-bound range queries? How were inclusive range queries done in the past, when an exclusive upper bound was the only available functionality?
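
    For illustration, a minimal sketch of the NavigableMap route mentioned above. It assumes the map at hand really is a NavigableMap (an unwrapped TreeMap is; the Collections.synchronizedSortedMap wrapper in the snippet is not, which is exactly the gap the question points out):

        import java.util.NavigableMap;
        import java.util.SortedMap;

        class InclusiveRanges {
            // NavigableMap.subMap lets the caller choose inclusive/exclusive bounds directly,
            // so no key-type-specific "+1" arithmetic is needed.
            static <K, V> SortedMap<K, V> subMapInclusive(SortedMap<K, V> map, K from, K to) {
                if (map instanceof NavigableMap) {
                    return ((NavigableMap<K, V>) map).subMap(from, true, to, true);
                }
                throw new UnsupportedOperationException("inclusive upper bound needs a NavigableMap");
            }
        }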

  • Building big, immutable objects without using constructors having long parameter lists

    - by Malax
    Hi StackOverflow! I have some big (more than 3 fields) objects which can and should be immutable. Every time I run into that case I tend to create constructor abominations with long parameter lists. It doesn't feel right, it is hard to use, and readability suffers. It is even worse if the fields are some sort of collection type, like lists. A simple addSibling(S s) would ease the object creation so much, but renders the object mutable. What do you guys use in such cases? I'm on Scala and Java, but I think the problem is language agnostic as long as the language is object oriented. Solutions I can think of: "constructor abominations with long parameter lists", or the Builder Pattern. Thanks for your input!
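
    A minimal sketch of the Builder Pattern option (field names are illustrative): the builder is mutable and chainable, while the built object stays immutable.

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public final class Person {
            private final String name;
            private final List<String> siblings;

            private Person(Builder b) {
                this.name = b.name;
                this.siblings = Collections.unmodifiableList(new ArrayList<>(b.siblings));
            }

            public static final class Builder {
                private String name;
                private final List<String> siblings = new ArrayList<>();

                public Builder name(String name) { this.name = name; return this; }
                public Builder addSibling(String s) { siblings.add(s); return this; }
                public Person build() { return new Person(this); }
            }
        }

        // Usage: Person p = new Person.Builder().name("Ada").addSibling("Bob").build();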

  • When is lazy evaluation not useful?

    - by Cherian
    Delayed execution is almost always a boon, but then there are cases when it's a problem and you resort to "fetch" (in NHibernate) to eager-fetch the data. Do you know of practical situations where lazy evaluation can bite you back?
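
    One classic case where lazy loading bites back is the N+1 selects problem. A Hibernate-flavored sketch (the Order entity and its lazy getLines() collection are hypothetical):

        import java.util.List;
        import org.hibernate.Session;

        class LazyBite {
            static int countLines(Session session) {
                List<Order> orders = session.createQuery("from Order", Order.class).list(); // 1 query
                int total = 0;
                for (Order order : orders) {
                    total += order.getLines().size(); // + N more queries, one per order
                }
                return total;
                // "from Order o join fetch o.lines" would load everything in a single query instead
            }
        }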

  • Managing Cisco programmatically; Telnet vs SNMP?

    - by MikeHerrera
    I was recently approached by a network-engineer co-worker who would like to offload his minor network admin duties to a junior-level helpdesk tech. The specific location in need of management acts as an ISP for tenants on its single-site property, so there are a lot of small adjustments being made on a daily basis. I am thinking it would be helpful to write him a WinForms app to manage the 32 Cisco devices on-site. I'd like to initially provide functionality that could modify access control lists, port VLAN assignments, and bandwidth limitations per VLAN... adding more to the list as it's deemed valuable. My initial thought was to emulate a telnet session with the network device, utilizing my network engineer's familiarity with the command-line / IOS interaction. Minimal time would be required for me to learn Cisco IOS conventions myself. Though while searching for solutions, it appears that most people favor SNMP; that, or their specific circumstances pushed them in the direction of SNMP. I wanted to know if I've overlooked an obvious benefit of SNMP. Should I be using SNMP? Why or why not?

  • Validation without ServiceLocator

    - by Dmitriy Nagirnyak
    Hi, I keep coming back to the question of the best way to perform validation on POCO objects that need access to some context (ISession in NHibernate, or an IRepository, for example). The only option I can still see is to use a Service Locator, so my validation would look like:

        public class User : ICanValidate {
            public User() {} // We need this constructor (so no context is known)

            public virtual string Username { get; set; }

            public IEnumerable<ValidationError> Validate() {
                if (ServiceLocator.GetService<IUserRepository>().FindUserByUsername(Username) != null)
                    yield return new ValidationError("Username", "User already exists.");
            }
        }

    I already use Inversion of Control and Dependency Injection and really don't like the Service Locator, for a number of reasons:

        Implicit dependencies are harder to maintain.
        The code is harder to test.
        Potential threading issues.
        The only explicit dependency is on the ServiceLocator itself.
        The code becomes harder to understand.
        The ServiceLocator interfaces need to be registered during testing.

    But on the other side, with plain POCO objects, I do not see any other way of performing validation like the above without a ServiceLocator, using only IoC/DI. So the question would be: is there any way to use DI/IoC for the situation described above? Thanks, Dmitriy.
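
    One DI-friendly alternative is to keep User a plain object and move the context-dependent rule into a validator whose dependencies the container injects. A rough sketch, written in Java here but mirroring the C# types above (User, UserRepository and ValidationError stand in for the question's own types):

        import java.util.ArrayList;
        import java.util.List;

        class UserValidator {
            private final UserRepository users;

            UserValidator(UserRepository users) { // constructor-injected by the IoC container
                this.users = users;
            }

            List<ValidationError> validate(User user) {
                List<ValidationError> errors = new ArrayList<>();
                if (users.findUserByUsername(user.getUsername()) != null) {
                    errors.add(new ValidationError("Username", "User already exists."));
                }
                return errors;
            }
        }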

  • Are web-safe colors still relevant?

    - by Gavin Miller
    Since the vast majority of monitors are 16-bit color or more, including mobile devices, does it make sense to even consider web-safe colors when choosing color schemes? Or is it something that ought to be relegated to history as a piece of trivia? For those of you that don't know what web-safe colors are: Another set of 216 color values is commonly considered to be the "web-safe" color palette, developed at a time when many computer displays were only capable of displaying 256 colors. A set of colors was needed that could be shown without dithering on 256-color displays; the number 216 was chosen partly because computer operating systems customarily reserved sixteen to twenty colors for their own use; it was also selected because it allows exactly six shades each of red, green, and blue (6 × 6 × 6 = 216). The list of colors is often presented as if it has special properties that render them immune to dithering. In fact, on 256-color displays applications can set a palette of any selection of colors that they choose, dithering the rest. These colors were chosen specifically because they matched the palettes selected by the then leading browser applications. [Wikipedia]
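
    For what it's worth, the quoted 6 × 6 × 6 arithmetic is easy to make concrete: the six per-channel values are 0x00, 0x33, 0x66, 0x99, 0xCC and 0xFF. A quick sketch that enumerates the palette:

        import java.util.ArrayList;
        import java.util.List;

        class WebSafePalette {
            static List<String> palette() {
                int[] steps = {0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF};
                List<String> colors = new ArrayList<>();
                for (int r : steps)
                    for (int g : steps)
                        for (int b : steps)
                            colors.add(String.format("#%02X%02X%02X", r, g, b));
                return colors; // 6 * 6 * 6 = 216 entries
            }
        }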

  • Single value data to multiple values of data in database relation

    - by Sofiane Merah
    I have such a hard time picturing this. I just don't have the brain to do it. I have a table called reports:

        ---------------------------------------------
        | report_id | set_of_bads | field1 | field2 |
        ---------------------------------------------
        | 123       | set1        | qwe    | qwe    |
        | 321       | 123112      | ewq    | ewq    |
        ---------------------------------------------

    I have another table called bads. This table contains a list of bad data:

        ------------------------------------------------
        | bad_id | set_it_belongs_to | field2 | field3 |
        ------------------------------------------------
        | 1      | set1              | qwe    | qwe    |
        | 2      | set1              | qee    | tte    |
        | 3      | set1              | q44w   | 3qwe   |
        | 4      | 234               | qoow   | 3qwe   |
        ------------------------------------------------

    Now I have set the first field of every table as the primary key. My question is, how do I connect the field set_of_bads to set_it_belongs_to in the bads table? This way, if I want to get the entire set of data that is set1 by calling on the reports table, I can do it. Example: hey reports table, bring up the row that has the report_id 123. Okay, thank you. Now get all the rows from bads that have the set_of_bads value from the row with report_id 123. Thanks.

  • Best code structure for arcade games

    - by user280454
    Hi, I've developed a few arcade games so far and have the following structure in almost all of them: I have one file which contains a class called Kernel with the following functions:

        init()
        welcome_screen()
        menu_screen()
        help_Screen()
        play_game()
        end_screen()

    And another file, called Game, which basically calls all these functions and controls the flow of the game. Also, I have classes for different characters, whenever required. Is this a good code structure, or should I try having different files (classes) for different functions like the welcome screen, playing, help screen, etc.? In other words, instead of including all the code for these things in one file, should I have different classes for each of them? The only problem I think it might cause is that I would need certain variables, like score and characters, which are common to all of them; that's why I have all these functions in the Kernel class, so that they can all use these variables.
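
    A rough sketch of the split-into-classes alternative (names are illustrative): the shared state such as the score lives in one object that is handed to each screen, instead of living in Kernel-wide variables.

        import java.util.ArrayList;
        import java.util.List;

        class GameState {
            int score;
            List<String> characters = new ArrayList<>();
        }

        interface Screen {
            void run(GameState state); // every screen receives the shared state
        }

        class WelcomeScreen implements Screen {
            public void run(GameState state) { /* draw title, wait for input */ }
        }

        class PlayScreen implements Screen {
            public void run(GameState state) { state.score += 10; /* game loop */ }
        }

        class Game {
            public static void main(String[] args) {
                GameState state = new GameState();
                for (Screen s : List.of(new WelcomeScreen(), new PlayScreen())) {
                    s.run(state);
                }
            }
        }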

  • So what RDF database do I use?

    - by keisimone
    Hi, I have a similar issue to the one described in http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters and I am convinced to use RDF now, but I already have a database in MySQL and the code is in PHP. 1) So what RDF database should I use? 2) Do I combine the approaches, i.e. keep class table inheritance in the MySQL database and put just the weird product attributes in RDF? I don't think I should move everything to an RDF database, since it is only the products, and the wide array of possible attributes and values, that are giving me the problem. 3) What PHP resources and articles should I look at that will help me with this? 4) Thank you.

  • Should this database table be normalized?

    - by oo
    I have taken over a database that stores fitness information, and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables. Today there is one table, called workouts, that has the following fields:

        id, exercise_id, reps, weight, date, person_id

    So if I did 3 sets each of 2 different exercises on one day, I would have 6 records in that table for that day. For example:

        id, exercise_id, reps, weight, date, person_id
        1, 1, 10, 100, 1/1/2010, 10
        2, 1, 10, 100, 1/1/2010, 10
        3, 1, 10, 100, 1/1/2010, 10
        4, 2, 10, 100, 1/1/2010, 10
        5, 2, 10, 100, 1/1/2010, 10
        6, 2, 10, 100, 1/1/2010, 10

    So the question is, given that there is some redundant data (date, person_id, exercise_id) across multiple records, should this be normalized into three tables?

        WorkoutSummary:
            - id
            - date
            - person_id

        WorkoutExercise:
            - id
            - workout_id (foreign key into WorkoutSummary)
            - exercise_id

        WorkoutSets:
            - id
            - workout_exercise_id (foreign key into WorkoutExercise)
            - reps
            - weight

    I would guess the downside is that the queries would be slower after this refactoring, as we would now need to join 3 tables to do the same query that had no joins before. The benefit of the refactoring is that it allows us in the future to add new fields at the workout summary level or the exercise level without adding more duplication. Any feedback on this debate?

  • Storing varchar(max) & varbinary(max) together - Problem?

    - by Tony Basallo
    I have an app that will have entries of both the varchar(max) and varbinary(max) data types. I was considering putting these both in a separate table, together, even if only one of the two will be used at any given time. The question is whether storing them together has any impact on performance. Considering that they are stored in the heap, I'm thinking that having them together will not be a problem. However, the varchar(max) column will probably have the "text in row" table option set. I couldn't find any performance testing or profiling while "googling bing", probably too specific a question? The SQL Server 2008 table looks like this:

        Id
        ParentId
        Version
        VersionDate
        StringContent - varchar(max)
        BinaryContent - varbinary(max)

    The app will decide which of the two columns to select when the data is queried. The string column will be used much more frequently than the binary column; will this have any impact on performance?

  • Questions and considerations to ask a client when designing a database

    - by Julia
    Hi guys! So, as the title says, I would like to hear your advice on the most important questions to consider and ask end users before designing a database for their application. We are to build a database-oriented app, with special attention paid to DB security (access control, encryption, integrity, backups)... The database will also keep some personal information about people, which is considered sensitive by law, so security must be good. I worked on school projects with databases, but this is my first time working "in the real world", where this DB security has real implications. I found some advice and questions to ask on the internet, but here I always get the best ones. All help appreciated! Thank you!

  • Best practices for managing limited client licenses/login

    - by MicSim
    I have a multi-user software solution (containing different applications, i.e. EXEs) that should allow only a limited number of concurrent users. It's designed to run in an intranet. I don't have a really good, satisfactory solution to the problem of counting the client licenses yet. The key requirements are:

        Multiple instances (starts) of the same application (= process) should count as only one client license.
        Starting different applications of the software solution should also count as only one (the same) client license.
        An application crash should not lead to orphaned used licenses.
        The above should also work in Terminal Server environments (all clients have the same IP, but different install folders).

    I'm looking for established patterns, solutions, and tips for managing used client licenses. Specific hints for the above situation are also welcome.
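
    One established pattern is a lease with a heartbeat: the seat is keyed by session (for example user + machine + Terminal Server session id) rather than by process, and a crashed client simply stops renewing, so its lease expires on its own. A rough server-side sketch with illustrative names:

        import java.util.HashMap;
        import java.util.Map;

        class LicenseServer {
            private final int maxSeats;
            private final long leaseMillis;
            private final Map<String, Long> leases = new HashMap<>(); // seatKey -> expiry time

            LicenseServer(int maxSeats, long leaseMillis) {
                this.maxSeats = maxSeats;
                this.leaseMillis = leaseMillis;
            }

            // seatKey identifies the session (e.g. user + machine + TS session id), so any number
            // of EXEs or repeated starts from the same session share a single license seat.
            synchronized boolean acquireOrRenew(String seatKey) {
                long now = System.currentTimeMillis();
                leases.values().removeIf(expiry -> expiry < now); // crashed clients time out here
                if (leases.containsKey(seatKey) || leases.size() < maxSeats) {
                    leases.put(seatKey, now + leaseMillis);
                    return true;
                }
                return false; // all seats in use
            }
        }

    Clients would call acquireOrRenew periodically (well within the lease duration) while any of the applications is running.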

  • Problem with jQuery and ASP.Net MVC

    - by robert_d
    I have a problem with jQuery. Here is how my web app works: the Search web page, which contains a form and a jQuery script, posts data to the Search() action in the Home controller after the user clicks the button1 button. Search.aspx:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<GLSChecker.Models.WebGLSQuery>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Title
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>Search</h2>
            <% Html.EnableClientValidation(); %>
            <% using (Html.BeginForm()) {%>
                <fieldset>
                    <div class="editor-label">
                        <%: Html.LabelFor(model => model.Url) %>
                    </div>
                    <div class="editor-field">
                        <%: Html.TextBoxFor(model => model.Url, new { size = "50" }) %>
                        <%: Html.ValidationMessageFor(model => model.Url) %>
                    </div>
                    <div class="editor-label">
                        <%: Html.LabelFor(model => model.Location) %>
                    </div>
                    <div class="editor-field">
                        <%: Html.TextBoxFor(model => model.Location, new { size = "50" }) %>
                        <%: Html.ValidationMessageFor(model => model.Location) %>
                    </div>
                    <div class="editor-label">
                        <%: Html.LabelFor(model => model.KeywordLines) %>
                    </div>
                    <div class="editor-field">
                        <%: Html.TextAreaFor(model => model.KeywordLines, 10, 60, null) %>
                        <%: Html.ValidationMessageFor(model => model.KeywordLines) %>
                    </div>
                    <p>
                        <input id="button1" type="submit" value="Search" />
                    </p>
                </fieldset>
            <% } %>

            <script src="../../Scripts/jquery-1.4.1.js" type="text/javascript"></script>
            <script type="text/javascript">
                jQuery("#button1").click(function (e) {
                    window.setInterval(refreshResult, 5000);
                });
                function refreshResult() {
                    jQuery("#divResult").load("/Home/Refresh");
                }
            </script>

            <div id="divResult">
            </div>
        </asp:Content>

    The controller action:

        [HttpPost]
        public ActionResult Search(WebGLSQuery queryToCreate)
        {
            if (!ModelState.IsValid)
                return View("Search");
            queryToCreate.Remote_Address = HttpContext.Request.ServerVariables["REMOTE_ADDR"];
            Session["Result"] = null;
            SearchKeywordLines(queryToCreate);
            Thread.Sleep(15000);
            return View("Search");
        } // Search()

    After the button1 button is clicked, the above script from the Search web page runs, and the Search() action in the controller runs for a longer period of time; I simulate this in testing by putting Thread.Sleep(15000) in Search(). Five seconds after the Search button was pressed, the above jQuery script calls the Refresh() action in the Home controller:

        public ActionResult Refresh()
        {
            ViewData["Result"] = DateTime.Now;
            return PartialView();
        }

    Refresh() renders this partial:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%= ViewData["Result"] %>

    The problem is that in Internet Explorer 8 there is only one request to /Home/Refresh; in Firefox 3.6.3 all requests to /Home/Refresh are made, but nothing is displayed on the web page. I would be grateful for helpful suggestions.

  • Algorithm to match list of regular expressions

    - by DSII
    I have two algorithmic questions for a project I am working on. I have thought about these, and have some suspicions, but I would love to hear the community's input as well. Suppose I have a string, and a list of N regular expressions (actually they are wildcard patterns representing a subset of full regex functionality). I want to know whether the string matches at least one of the regular expressions in the list. Is there a data structure that can allow me to match the string against the list of regular expressions in sublinear (presumably logarithmic) time? This is an extension of the previous problem. Suppose I have the same situation: a string and a list of N regular expressions, only now each of the regular expressions is paired with an offset within the string at which the match must begin (or, if you prefer, each of the regular expressions must match a substring of the given string beginning at the given offset). To give an example, suppose I had the string: This is a test string and the regex patterns and offsets: (a) his.* at offset 0 (b) his.* at offset 1 The algorithm should return true. Although regex (a) does not match the string beginning at offset 0, regex (b) does match the substring beginning at offset 1 ("his is a test string"). Is there a data structure that can allow me to solve this problem in sublinear time? One possibly useful piece of information is that often, many of the offsets in the list of regular expressions are the same (i.e. often we are matching the substring at offset X many times). This may be useful to leverage the solution to problem #1 above. Thank you very much in advance for any suggestions you may have!
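
    Not an answer to the sublinear-time question, but a common practical baseline for problem #1 is to fold the N patterns into a single alternation so the engine scans the string only once. A sketch (it assumes the wildcard patterns have already been translated into valid regexes):

        import java.util.List;
        import java.util.regex.Pattern;
        import java.util.stream.Collectors;

        class AnyMatcher {
            private final Pattern combined;

            AnyMatcher(List<String> regexes) {
                // (?:p1)|(?:p2)|... - one compiled pattern for the whole list
                this.combined = Pattern.compile(
                        regexes.stream().map(p -> "(?:" + p + ")").collect(Collectors.joining("|")));
            }

            boolean matchesAny(String s) {
                return combined.matcher(s).matches();
            }
        }

    For problem #2, Matcher.region(offset, s.length()) combined with lookingAt() anchors a match attempt at a given offset, though that is still linear in the number of patterns.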

  • Why is the "alt" attribute for the <img> tag considered mandatory by the HTML validator?

    - by infant programmer
    Is there any logical or technical reason (with respect to W3C validation) for making alt a required attribute? This is my actual problem: though my page is otherwise perfect with respect to the W3C validation rules, the only error I am getting is

        line XX column YY - Error: required attribute "ALT" not specified

    I know the significance of the "alt" attribute, and I have omitted it where it is unnecessary (to be more elaborate: I have added the image purely to improve the look of my page, and I don't want the alt attribute to show an irrelevant message to the viewer). Getting rid of the error is secondary; rather, I am curious to know whether this is a flaw in the validation rules. I thank Stack Overflow and all the members who responded to me; I got my doubt clarified. :-)

  • Optimize MySQL table?

    - by fabien-barbier
    Here is my actual table schema (I'm using MySQL):

        Table experiment:
            code (int)
            sample_1_id, sample_2_id, ... until ... sample_12_id
            rna_1_id, rna_2_id, ... until ... rna_12_id
            experiment_start

    How can I optimize both parts, sample_n_id and rna_n_id (all are bigint(20) and allow null = true)? About the values: we can have, for example, sample_1_id = 2, sample_2_id = 5, ... Note: values can be updated. Ideas? Thanks.

  • Implementation of a general-purpose object structure (property bag)

    - by Thomas Wanner
    We need to implement some general-purpose object structure, much like an object in dynamic languages, that would give us the possibility of creating the whole object graph on the fly. This class has to be serializable and somewhat user friendly. So far we have made some experiments with a class derived from Dictionary<string, object>, using dot-notation paths to store properties and collections in the object tree. We have also found an article that implements something similar, but it doesn't seem to fit completely into our picture either. Do you know about some good implementations / libraries that deal with a similar problem, or do you have any (non-trivial) ideas that could help us with our own implementation? Also, I should probably say that we are using .NET 3.5, so we can't take advantage of the new features in .NET 4.0 like the dynamic type, etc. (as far as I know it's also not possible to use any subset of it in a .NET 3.5 solution).
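
    For what it's worth, the dot-path-over-nested-dictionaries idea can be sketched in a few lines. This is shown in Java; the same shape works over Dictionary<string, object> on .NET 3.5. Names and error handling are illustrative only:

        import java.util.HashMap;
        import java.util.Map;

        class PropertyBag {
            private final Map<String, Object> values = new HashMap<>();

            @SuppressWarnings("unchecked")
            void set(String path, Object value) {
                Map<String, Object> node = values;
                String[] parts = path.split("\\.");
                for (int i = 0; i < parts.length - 1; i++) {
                    // create intermediate nodes on demand
                    node = (Map<String, Object>) node.computeIfAbsent(parts[i], k -> new HashMap<String, Object>());
                }
                node.put(parts[parts.length - 1], value);
            }

            @SuppressWarnings("unchecked")
            Object get(String path) {
                Map<String, Object> node = values;
                String[] parts = path.split("\\.");
                for (int i = 0; i < parts.length - 1; i++) {
                    Object next = node.get(parts[i]);
                    if (!(next instanceof Map)) return null;
                    node = (Map<String, Object>) next;
                }
                return node.get(parts[parts.length - 1]);
            }
        }

        // bag.set("customer.address.city", "Boston"); bag.get("customer.address.city") returns "Boston"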

  • GOTO still considered harmful?

    - by Kyle Cronin
    Everyone is aware of Dijkstra's Letters to the editor: go to statement considered harmful (also here .html transcript and here .pdf), and there has been a formidable push since that time to eschew the goto statement whenever possible. While it's possible to use goto to produce unmaintainable, sprawling code, it nevertheless remains in modern programming languages. Even the advanced continuation control structure in Scheme can be described as a sophisticated goto.

    What circumstances warrant the use of goto? When is it best to avoid?

    As a follow-up question: C provides a pair of functions, setjmp and longjmp, that provide the ability to goto not just within the current stack frame but within any of the calling frames. Should these be considered as dangerous as goto? More dangerous?

    Dijkstra himself regretted that title, for which he was not responsible. At the end of EWD1308 (also here .pdf) he wrote:

        Finally a short story for the record. In 1968, the Communications of the ACM published a text of mine under the title "The goto statement considered harmful", which in later years would be most frequently referenced, regrettably, however, often by authors who had seen no more of it than its title, which became a cornerstone of my fame by becoming a template: we would see all sorts of articles under the title "X considered harmful" for almost any X, including one titled "Dijkstra considered harmful". But what had happened? I had submitted a paper under the title "A case against the goto statement", which, in order to speed up its publication, the editor had changed into a "letter to the Editor", and in the process he had given it a new title of his own invention! The editor was Niklaus Wirth.

    A well-thought-out classic paper about this topic, to be matched to that of Dijkstra, is Structured Programming with go to Statements (also here .pdf), by Donald E. Knuth. Reading both helps to reestablish context and a non-dogmatic understanding of the subject. In this paper, Dijkstra's opinion on this case is reported and is even stronger:

        Donald E. Knuth: I believe that by presenting such a view I am not in fact disagreeing sharply with Dijkstra's ideas, since he recently wrote the following: "Please don't fall into the trap of believing that I am terribly dogmatical about [the go to statement]. I have the uncomfortable feeling that others are making a religion out of it, as if the conceptual problems of programming could be solved by a single trick, by a simple form of coding discipline!"

  • Confused about this factory, as it doesn't look like either an Abstract Factory or a Factory Method

    - by Pin
    I'm looking into Guice and I've been reading its documentation recently. Reading the motivation section, I don't understand the factories part, or why they name it that way. To me that factory is just a wrapper for the implementing class they want it to return after calling getInstance().

        public class CreditCardProcessorFactory {
            private static CreditCardProcessor instance;

            public static void setInstance(CreditCardProcessor creditCardProcessor) {
                instance = creditCardProcessor;
            }

            public static CreditCardProcessor getInstance() {
                if (instance == null) {
                    throw new IllegalStateException("CreditCardProcessorFactory not initialized. "
                            + "Did you forget to call CreditCardProcessor.setInstance() ?");
                }
                return instance;
            }
        }

    Why do they call it a factory as well, if it is neither an abstract factory nor a factory method? Or am I missing something? Thanks.
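
    For contrast, a minimal sketch of what a classic static factory method usually looks like: it decides which implementation to construct and returns a fresh instance. The snippet above, by comparison, only stores and hands back an instance somebody else created, which is why it reads more like a holder or registry. The implementation class names here are made up:

        interface CreditCardProcessor {
            void charge(int cents);
        }

        class PaypalCreditCardProcessor implements CreditCardProcessor {
            public void charge(int cents) { /* talk to the real gateway */ }
        }

        class FakeCreditCardProcessor implements CreditCardProcessor {
            public void charge(int cents) { /* record the charge for tests */ }
        }

        class CreditCardProcessors {
            // The factory decides which implementation to build and returns a new instance.
            static CreditCardProcessor newProcessor(boolean sandbox) {
                return sandbox ? new FakeCreditCardProcessor() : new PaypalCreditCardProcessor();
            }
        }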

  • To subclass or not to subclass

    - by poulenc
    I have three objects: Action, Issue and Risk. These all contain a number of common variables/attributes (for example: description, title, due date, raised by, etc.) and some specific fields (Risk has probability). The question is, should I:

        1. Create three separate classes, Action, Risk and Issue, each containing the repeated fields, or
        2. Create a parent class, Abstract_Item, containing these fields and the operations on them, and then have Action, Risk and Issue subclass Abstract_Item? This would adhere to the DRY principle.
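
    A bare-bones sketch of the second option (field names abbreviated; it assumes the common fields really do mean the same thing in all three types):

        import java.time.LocalDate;

        abstract class AbstractItem {
            protected String title;
            protected String description;
            protected LocalDate dueDate;
            protected String raisedBy;
            // common getters/setters (or a common builder) live here exactly once
        }

        class Action extends AbstractItem { }

        class Issue extends AbstractItem { }

        class Risk extends AbstractItem {
            private double probability; // risk-specific field stays in the subclass
        }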

  • Relational vs. Dimensional Databases, what's the difference?

    - by grautur
    I'm trying to learn about OLAP and data warehousing, and I'm confused about the difference between relational and dimensional modeling. Is dimensional modeling basically relational modeling, but allowing for redundant/un-normalized data? For example, let's say I have historical sales data on (product, city, # sales). I understand that the following would be a relational point-of-view:

        Product | City          | # Sales
        Apples  | San Francisco | 400
        Apples  | Boston        | 700
        Apples  | Seattle       | 600
        Oranges | San Francisco | 550
        Oranges | Boston        | 500
        Oranges | Seattle       | 600

    While the following is a more dimensional point-of-view:

        Product | San Francisco | Boston | Seattle
        Apples  | 400           | 700    | 600
        Oranges | 550           | 500    | 600

    But it seems like both points of view would nonetheless be implemented in an identical star schema:

        Fact table: Product ID, Region ID, # Sales
        Product dimension: Product ID, Product Name
        City dimension: City ID, City Name

    And it's not until you start adding some additional details to each dimension that the differences start popping up. For instance, if you wanted to track regions as well, a relational database would tend to have a separate region table, in order to keep everything normalized:

        City dimension: City ID, City Name, Region ID
        Region dimension: Region ID, Region Name, Region Manager, # Regional Stores

    While a dimensional database would allow for denormalization to keep the region data inside the city dimension, in order to make it easier to slice the data:

        City dimension: City ID, City Name, Region Name, Region Manager, # Regional Stores

    Is this correct?
