Search Results

Search found 14602 results on 585 pages for 'object-oriented design'.

Page 210/585 | < Previous Page | 206 207 208 209 210 211 212 213 214 215 216 217  | Next Page >

  • How to add an "Export to ebook" feature to a site?

    - by systempuntoout
    How could I add to a blog, or a site in general, a feature that lets users export the content to EPUB or some other open ebook format? It's not a feature I normally see on most of the sites I browse every day (some have export to PDF, which is not great as an ebook format), so do you think it is feasible? I own an ebook reader, and reading HTML pages saved as PDF is not so good. I'm looking for a general solution here, so I have not specified any particular technology; if you know of some sites that offer this feature, I would like to try them. Thanks
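
    On feasibility: an EPUB file is essentially a ZIP archive containing the XHTML content plus two small XML descriptor files, so a server-side export endpoint mostly has to package the pages it already renders. The Java sketch below illustrates the general shape using only java.util.zip; the file names, title, and identifier are placeholders, and a strictly conformant EPUB 2 file would also need an NCX table of contents and a real unique identifier, both omitted here for brevity.

        import java.io.FileOutputStream;
        import java.nio.charset.StandardCharsets;
        import java.util.zip.CRC32;
        import java.util.zip.ZipEntry;
        import java.util.zip.ZipOutputStream;

        public class EpubSketch {
            public static void main(String[] args) throws Exception {
                String xhtml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
                        + "<html xmlns=\"http://www.w3.org/1999/xhtml\"><head><title>Post</title></head>"
                        + "<body><h1>My blog post</h1><p>Exported content goes here.</p></body></html>";
                String container = "<?xml version=\"1.0\"?>\n"
                        + "<container version=\"1.0\" xmlns=\"urn:oasis:names:tc:opendocument:xmlns:container\">"
                        + "<rootfiles><rootfile full-path=\"content.opf\" media-type=\"application/oebps-package+xml\"/>"
                        + "</rootfiles></container>";
                String opf = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"
                        + "<package xmlns=\"http://www.idpf.org/2007/opf\" version=\"2.0\" unique-identifier=\"id\">"
                        + "<metadata xmlns:dc=\"http://purl.org/dc/elements/1.1/\">"
                        + "<dc:title>Blog export</dc:title><dc:language>en</dc:language>"
                        + "<dc:identifier id=\"id\">example-export-1</dc:identifier></metadata>"
                        + "<manifest><item id=\"post\" href=\"post.xhtml\" media-type=\"application/xhtml+xml\"/></manifest>"
                        + "<spine><itemref idref=\"post\"/></spine></package>";

                try (ZipOutputStream zip = new ZipOutputStream(new FileOutputStream("export.epub"))) {
                    // The mimetype entry must come first and be stored uncompressed.
                    byte[] mime = "application/epub+zip".getBytes(StandardCharsets.US_ASCII);
                    ZipEntry entry = new ZipEntry("mimetype");
                    entry.setMethod(ZipEntry.STORED);
                    entry.setSize(mime.length);
                    CRC32 crc = new CRC32();
                    crc.update(mime);
                    entry.setCrc(crc.getValue());
                    zip.putNextEntry(entry);
                    zip.write(mime);
                    zip.closeEntry();

                    add(zip, "META-INF/container.xml", container);
                    add(zip, "content.opf", opf);
                    add(zip, "post.xhtml", xhtml);
                }
            }

            private static void add(ZipOutputStream zip, String name, String content) throws Exception {
                zip.putNextEntry(new ZipEntry(name));
                zip.write(content.getBytes(StandardCharsets.UTF_8));
                zip.closeEntry();
            }
        }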

    Read the article

  • MVC pattern and (Game) State pattern

    - by topright
    Game states separate I/O processing, game logic and rendering into different classes: while (game_loop) { game->state->io_events(this); game->state->logic(this); game->state->rendering(); } You can easily change the game state in this approach. MVC separation works in a more complex way: while (game_loop) { game->controller->io_events(this); game->model->logic(this); game->view->rendering(); } So changing game states becomes an error-prone task (you have to switch 3 classes, not 1). What are practical ways of combining these 2 concepts?
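
    One practical way to combine the two (a minimal sketch in Java, not taken from the question; the snippets above are C++-style pseudocode, but the idea is language-neutral) is to keep the MVC split inside each state: every state bundles its own controller, model and view, so the game loop still swaps only a single state object when the state changes.

        interface Controller { void ioEvents(Game g); }
        interface Model { void logic(Game g); }
        interface View { void rendering(); }

        /** A state is just a bundle of one controller, one model and one view. */
        final class GameState {
            final Controller controller;
            final Model model;
            final View view;
            GameState(Controller c, Model m, View v) { controller = c; model = m; view = v; }
        }

        final class Game {
            private GameState state;                        // swap one object to change state
            void setState(GameState next) { state = next; }
            void loop() {
                while (running()) {
                    state.controller.ioEvents(this);
                    state.model.logic(this);
                    state.view.rendering();
                }
            }
            private boolean running() { return true; }      // placeholder loop condition
        }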

    Read the article

  • Advice on displaying and allowing editing of data using ASP.NET MVC?

    - by Remnant
    I am embarking upon my first ASP.NET MVC project and I would like to get some input on possible ways to display database data and on general best practice. In short, the body of my web page will show data from my database in a table-like format, with each table row showing similar data. For example:

        Name       Age  Position  Date Joined
        Jon Smith  23   Striker   18th Mar 2005
        John Doe   38   Defender  3rd Jan 1988

    In terms of functionality, I'd primarily like to give the user the ability to edit the data and, after the edit, commit it to the database and refresh the view. The reason I want to refresh the view is that the data is date-ordered and I will need to re-sort if the user edits a date field. My main question is: what architecture and tools would be best suited to fulfil these requirements at a high level? From the research I have done so far, my initial conclusions were: ADO.NET for data retrieval (this is something I have used before and feel comfortable with; I like the look of LINQ to SQL, but I don't want to make the learning curve any steeper for my first outing into MVC land just yet); partial views to create a template and then iterate through a datatable that I have pulled back from my database model; and jQuery to allow the user to edit data in the table, error-check edited data entries, etc. Also, my initial view was that caching the data would not be a key requirement here. The only field a user will be able to update is the date field and, if they do, I will need to commit that data to the database immediately and then refresh the view (as the data is date-sorted). Any thoughts on this? Alternatively, I have seen some jQuery plug-ins that emulate a datagrid and provide associated functionality. My first thought is that I do not need all the functionality that comes with these plug-ins (e.g. zebra striping, the ability to sort by column using a sort glyph in the column headers, etc.) and I don't really see any benefit to them over and above the solution I have outlined above. Again, is there reason to reconsider this view? Finally, when a user edits a date, I will need to refresh the view. In order to do this I had been reading about Html.RenderAction, and it seemed like it may be a better option than using partial views, as I can incorporate application logic into the action method. Am I right to consider Html.RenderAction, or have I misunderstood its usage? I hope this post is clear and not too long. I did consider separate posts for each topic (e.g. partial views vs. Html.RenderAction, when to use a jQuery datagrid plug-in), but it feels like these issues are so intertwined that they need to be dealt with in the context of each other. Thanks

    Read the article

  • Graph colouring algorithm: typical scheduling problem

    - by newba
    Hi, I'm training on code problems like UVa, and I have this one in which I have to, given a set of n exams and k students enrolled in the exams, find whether it is possible to schedule all exams in two time slots. Input: several test cases. Each one starts with a line containing 1 < n < 200, the number of different examinations to be scheduled. The 2nd line has the number of cases k in which there is at least 1 student enrolled in 2 examinations. Then k lines follow, each containing 2 numbers that specify the pair of examinations for each case above. (An input with n = 0 means the end of the input and is not to be processed.) Output: you have to decide whether the examination plan is possible or not for 2 time slots. Example: Input: 3 3 0 1 1 2 2 0 9 8 0 1 0 2 0 3 0 4 0 5 0 6 0 7 0 8 0 Output: NOT POSSIBLE. POSSIBLE. I think the general approach is graph colouring, but I'm really a newb and I must confess I had some trouble understanding the problem. Anyway, I'm trying to do it and then submit it. Could someone please help me write some code for this problem? I will have to handle and understand this algorithm now in order to use it later, over and over. I prefer C or C++, but if you want, Java is fine with me ;) Thanks in advance
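
    Since two time slots correspond to two colours, this is a bipartiteness (2-colouring) check: exams are vertices, each reported pair is an edge, and the plan is possible exactly when the graph has no odd cycle. A minimal Java sketch of the BFS colouring follows (parsing of the exact input format above is left out):

        import java.util.ArrayDeque;
        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.Deque;
        import java.util.List;

        public class TwoSlots {
            /** Returns true if the exams can be split into two slots (the graph is bipartite). */
            static boolean possible(int n, int[][] pairs) {
                List<List<Integer>> adj = new ArrayList<>();
                for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
                for (int[] p : pairs) { adj.get(p[0]).add(p[1]); adj.get(p[1]).add(p[0]); }

                int[] colour = new int[n];
                Arrays.fill(colour, -1);                            // -1 = uncoloured
                for (int start = 0; start < n; start++) {
                    if (colour[start] != -1) continue;
                    colour[start] = 0;
                    Deque<Integer> queue = new ArrayDeque<>();
                    queue.add(start);
                    while (!queue.isEmpty()) {
                        int u = queue.poll();
                        for (int v : adj.get(u)) {
                            if (colour[v] == -1) { colour[v] = 1 - colour[u]; queue.add(v); }
                            else if (colour[v] == colour[u]) return false;   // odd cycle found
                        }
                    }
                }
                return true;
            }

            public static void main(String[] args) {
                int[][] first = {{0, 1}, {1, 2}, {2, 0}};           // triangle: odd cycle
                int[][] second = {{0,1},{0,2},{0,3},{0,4},{0,5},{0,6},{0,7},{0,8}};  // star
                System.out.println(possible(3, first) ? "POSSIBLE." : "NOT POSSIBLE.");
                System.out.println(possible(9, second) ? "POSSIBLE." : "NOT POSSIBLE.");
            }
        }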

    Read the article

  • Tool for documenting the fields of a database?

    - by Jerome WAGNER
    Hello, I need to add documentation to all the fields of 2 databases (1 PostgreSQL and 1 SQL Server), around 100 tables each. What tool would be convenient for doing that (reverse-engineering the schema and adding documentation manually on all fields)? My preference would go to an open source tool with graphical and XML output. Thanks for your help. Jerome WAGNER

    Read the article

  • How is jQuery so fast?

    - by ClarkeyBoy
    Hey, I have a rather large application which, on the admin frontend, takes a few seconds to load a page because of all the pageviews that it has to load into objects before displaying anything. It's a bit complex to explain how the system works, but a few of my other questions explain it in great detail. The main difference between what they say and the current system is that the customer frontend no longer loads all the pageviews into objects when a customer first views the page; it simply adds the pageview to the database and creates an object in an unsynchronised list. To put it simply, when a customer views a page it no longer loads all the pageviews into objects, but the admin frontend still does. I have been working on some admin tools on the customer frontend recently, so if an administrator clicks the description of an item in the catalogue then the right-hand column will display statistics and available actions for the selected item. To do this, the page which gets loaded (through $('action-container').load(bla bla bla);) into the right-hand column has to loop through ALL the pageviews; this ultimately means that ALL the pageviews are loaded into objects if they haven't been already. For some reason this loads really REALLY fast. The difference in speed is only about a second on my dev site, but the live site has thousands of pageviews, so the difference is quite big... So my question is: why is it that the admin frontend loads so slowly while using $(bla).load(bla); is so fast? I mean, whatever method jQuery uses, can't browsers use this method too and load pages super-fast? Obviously not, as someone would've done that by now, but I am interested to know just why the difference is so big... Is it just my system, or is there a major difference in speed between the browser getting a page and jQuery getting a page? Do other people experience the same kind of differences? Thanks in advance, Regards, Richard

    Read the article

  • 7 Card Poker Hand Evaluator

    - by Peter
    Hey, does anyone know a fast algorithm for evaluating 7-card poker hands? Something more efficient than simply brute-force checking every one of the 21 5-card combinations of hands from a set of 7. Cheers, Pete
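
    For reference, the brute-force baseline mentioned in the question is tiny: enumerate the C(7,5) = 21 five-card subsets and keep the best rank. A Java sketch is below; rank5 is a hypothetical placeholder for a real 5-card evaluator (lookup tables or bit tricks are where the actual speed-ups live), so its scoring here is meaningless.

        import java.util.Arrays;

        public class SevenCardEval {
            /** Hypothetical 5-card evaluator: higher return value = stronger hand. */
            static int rank5(int[] fiveCards) {
                return Arrays.stream(fiveCards).sum();   // placeholder scoring only
            }

            /** Best 5-card rank achievable from 7 cards: drop each pair of cards in turn. */
            static int rank7(int[] sevenCards) {
                int best = Integer.MIN_VALUE;
                for (int skipA = 0; skipA < 7; skipA++) {
                    for (int skipB = skipA + 1; skipB < 7; skipB++) {
                        int[] five = new int[5];
                        int k = 0;
                        for (int i = 0; i < 7; i++) {
                            if (i != skipA && i != skipB) five[k++] = sevenCards[i];
                        }
                        best = Math.max(best, rank5(five));   // 21 evaluations in total
                    }
                }
                return best;
            }

            public static void main(String[] args) {
                System.out.println(rank7(new int[] {2, 5, 9, 14, 23, 31, 47}));
            }
        }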

    Read the article

  • Are web-safe colors still relevant?

    - by Gavin Miller
    Since the vast majority of monitors are 16-bit color or more, including mobile devices, does it make sense to even consider web-safe colors when choosing color schemes? Or is it something that ought to be relegated to history as a piece of trivia? For those of you that don't know what web-safe colors are: Another set of 216 color values is commonly considered to be the "web-safe" color palette, developed at a time when many computer displays were only capable of displaying 256 colors. A set of colors was needed that could be shown without dithering on 256-color displays; the number 216 was chosen partly because computer operating systems customarily reserved sixteen to twenty colors for their own use; it was also selected because it allows exactly six shades each of red, green, and blue (6 × 6 × 6 = 216). The list of colors is often presented as if it has special properties that render them immune to dithering. In fact, on 256-color displays applications can set a palette of any selection of colors that they choose, dithering the rest. These colors were chosen specifically because they matched the palettes selected by the then leading browser applications. [Wikipedia]

    Read the article

  • recursive delete trigger and ON DELETE CASCADE constraints are not deleting everything

    - by bitbonk
    I have a very simple data model that represents a tree structure: the RootEntity is the root of such a tree; it can contain children of type ContainerEntity and of type AtomEntity. The type ContainerEntity can again contain children of type ContainerEntity and of type AtomEntity, but cannot contain children of type RootEntity. Children are referenced in a well-known order. The DB model for this is below. My problem now is that when I delete a RootEntity I want all children to be deleted recursively. I have created foreign keys with CASCADE DELETE and two delete triggers for this. But it is not deleting everything; it always leaves some items in the ContainerEntity, AtomEntity, ContainerEntity_Children and AtomEntity_Children tables, seemingly beginning at recursion level 3.

        CREATE TABLE RootEntity (
            Id UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_RootEntity PRIMARY KEY NONCLUSTERED (Id)
        );

        CREATE TABLE ContainerEntity (
            Id UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_ContainerEntity PRIMARY KEY NONCLUSTERED (Id)
        );

        CREATE TABLE AtomEntity (
            Id UNIQUEIDENTIFIER NOT NULL,
            Name VARCHAR(500) NOT NULL,
            CONSTRAINT PK_AtomEntity PRIMARY KEY NONCLUSTERED (Id)
        );

        CREATE TABLE RootEntity_Children (
            ParentId UNIQUEIDENTIFIER NOT NULL,
            OrderIndex INT NOT NULL,
            ChildContainerEntityId UNIQUEIDENTIFIER NULL,
            ChildAtomEntityId UNIQUEIDENTIFIER NULL,
            ChildIsContainerEntity BIT NOT NULL,
            CONSTRAINT PK_RootEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),
            -- foreign key to parent RootEntity
            CONSTRAINT FK_RootEntiry_Children__RootEntity FOREIGN KEY (ParentId)
                REFERENCES RootEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) ContainerEntity
            CONSTRAINT FK_RootEntiry_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) AtomEntity
            CONSTRAINT FK_RootEntiry_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
                REFERENCES AtomEntity (Id) ON DELETE CASCADE
        );

        CREATE TABLE ContainerEntity_Children (
            ParentId UNIQUEIDENTIFIER NOT NULL,
            OrderIndex INT NOT NULL,
            ChildContainerEntityId UNIQUEIDENTIFIER NULL,
            ChildAtomEntityId UNIQUEIDENTIFIER NULL,
            ChildIsContainerEntity BIT NOT NULL,
            CONSTRAINT PK_ContainerEntity_Children PRIMARY KEY NONCLUSTERED (ParentId, OrderIndex),
            -- foreign key to parent ContainerEntity
            CONSTRAINT FK_ContainerEntity_Children__RootEntity FOREIGN KEY (ParentId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) ContainerEntity
            CONSTRAINT FK_ContainerEntity_Children__ContainerEntity FOREIGN KEY (ChildContainerEntityId)
                REFERENCES ContainerEntity (Id) ON DELETE CASCADE,
            -- foreign key to referenced (child) AtomEntity
            CONSTRAINT FK_ContainerEntity_Children__AtomEntity FOREIGN KEY (ChildAtomEntityId)
                REFERENCES AtomEntity (Id) ON DELETE CASCADE
        );

        CREATE TRIGGER Delete_RootEntity_Children ON RootEntity_Children FOR DELETE AS
            DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
            DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted)
        GO

        CREATE TRIGGER Delete_ContainerEntiy_Children ON ContainerEntity_Children FOR DELETE AS
            DELETE FROM ContainerEntity WHERE Id IN (SELECT ChildContainerEntityId FROM deleted)
            DELETE FROM AtomEntity WHERE Id IN (SELECT ChildAtomEntityId FROM deleted)
        GO

    Read the article

  • How should I provide access to this custom DAL?

    - by Casey
    I'm writing a custom DAL (VB.NET) for an ordering system project. I'd like to explain how it is coded now, and receive some alternate ideas to make coding against the DAL easier and more readable. The DAL is part of an n-tier (not n-layer) application, where each tier is in its own assembly/DLL. The DAL consists of several classes that have specific behavior. For instance, there is an Order class that is responsible for retrieving and saving orders. Most of the classes have only two methods, a "Get" and a "Save," with multiple overloads for each. These classes are marked as Friend and are only visible to the DAL (which is in its own assembly). In most cases, the DAL returns what I will call a "Data Object." This object is a class that contains only data and validation, and is located in a common assembly that both the BLL and DAL can read. To provide public access to the DAL, I currently have a static (module) class that has many shared members. A simplified version looks something like this:

        Public Class DAL
            Private Sub New
            End Sub

            Public Shared Function GetOrder(OrderID as String) as OrderData
                Dim OrderGetter as New OrderClass
                Return OrderGetter.GetOrder(OrderID)
            End Function
        End Class

        Friend Class OrderClass
            Friend Function GetOrder(OrderID as string) as OrderData
            End Function
        End Class

    The BLL would ask for an order like this: DAL.GetOrder("123456"). As you can imagine, this gets cumbersome very quickly. I'm mainly interested in structuring access to the DAL so that Intellisense is very intuitive. As it stands now, there are too many methods/functions in the DAL class with similar names. One idea I had is to break down the DAL into nested classes:

        Public Class DAL
            Private Sub New
            End Sub

            Public Class Orders
                Private Sub New
                End Sub

                Public Shared Function Get(OrderID as string) as OrderData
                End Function
            End Class
        End Class

    So the BLL would call it like this: DAL.Orders.Get("12345"). This cleans it up a bit, but it leaves a lot of classes that only have references to other classes, which I don't like for some reason. Without resorting to passing DB-specific instructions (like where clauses) from the BLL to the DAL, what is the best or most common practice for providing a single point of access for the DAL?

    Read the article

  • Delphi, how to make independent windows

    - by Roy M Klever
    I have an application that uses tabs like the Chrome browser. Now I want to be able to open more forms and not be limited to only one form. These forms should act the same, but if I close the main form all forms are closed. How can I make all forms equal, so that no matter which form I close, only that form is closed and the application does not exit until all forms are closed? Any ideas? Kind regards, Roy M Klever

    Read the article

  • How would you organize a large complex web application (see basic example)?

    - by Anurag
    How do you usually organize complex web applications that are extremely rich on the client side? I have created a contrived example to indicate the kind of mess it's easy to get into if things are not managed well for big apps. Feel free to modify/extend this example as you wish - http://jsfiddle.net/NHyLC/1/ The example basically mirrors part of the comment posting on SO, and follows these rules: a comment must have 15 characters minimum, after multiple spaces are trimmed down to one. If Add Comment is clicked but the size is less than 15 after removing multiple spaces, then show a popup with the error. Indicate the number of characters remaining and summarize with color coding: gray indicates a small comment, brown indicates a medium comment, orange a large comment, and red a comment overflow. Only one comment can be submitted every 15 seconds; if a comment is submitted too soon, show a popup with an appropriate error message. A couple of issues I noticed with this example: this should ideally be a widget or some sort of packaged functionality; things like one comment per 15 seconds and the minimum 15-character comment belong to application-wide policies rather than being embedded inside each widget; too many hard-coded values; and no code organization, since the models, views and controllers are all bundled together. Not that MVC is the only approach for organizing rich client-side web applications, but there is none in this example. How would you go about cleaning this up? Applying a little MVC/MVP along the way? Here are some of the relevant functions, but it will make more sense if you look at the entire code on jsfiddle:

        /**
         * Handle comment change.
         * Update character count.
         * Indicate progress.
         */
        function handleCommentUpdate(comment) {
            var status = $('.comment-status');
            status.text(getStatusText(comment));
            status.removeClass('mild spicy hot sizzling');
            status.addClass(getStatusClass(comment));
        }

        /**
         * Is the comment valid for submission?
         */
        function commentSubmittable(comment) {
            var notTooSoon = !isTooSoon();
            var notEmpty = !isEmpty(comment);
            var hasEnoughCharacters = !isTooShort(comment);
            return notTooSoon && notEmpty && hasEnoughCharacters;
        }

        // submit comment
        $('.add-comment').click(function() {
            var comment = $('.comment-box').val();

            // submit comment, fake ajax call
            if (commentSubmittable(comment)) {
                ..
            }

            // show a popup if comment is mostly spaces
            if (isTooShort(comment)) {
                if (comment.length < 15) {
                    // blink status message
                } else {
                    popup("Comment must be at least 15 characters in length.");
                }
            }
            // show a popup if comment is submitted too soon
            else if (isTooSoon()) {
                popup("Only 1 comment allowed per 15 seconds.");
            }
        });

    Read the article

  • So what RDF database do I use?

    - by keisimone
    Hi, I have a similar issue to the one described in http://stackoverflow.com/questions/695752/product-table-many-kinds-of-product-each-product-has-many-parameters and I am convinced to use RDF now, but I already have a database in MySQL and the code is in PHP. 1) So what RDF database should I use? 2) Do I combine the approaches, meaning I keep class table inheritance in the MySQL database and put just the weird product attributes in RDF? I don't think I should move everything to an RDF database, since it is only the products and the wide array of possible attributes and values that are giving me the problem. 3) What PHP resources and articles should I look at that will help me with the creation of this? 4) Thank you.

    Read the article

  • How to implement an offline reader-writer lock

    - by Peter Morris
    Some context for the question: all objects in this question are persistent. All requests will be from a Silverlight client talking to an app server via a binary protocol (Hessian), not WCF. Each user will have a session key (not an ASP.NET session) which will be a string, integer, or GUID (undecided so far). Some objects might take a long time to edit (30 or more minutes), so we have decided to use pessimistic offline locking: pessimistic because having to reconcile conflicts would be far too annoying for users, offline because the client is not permanently connected to the server. Rather than storing session/object locking information in the object itself, I have decided that any aggregate root that may have its instances locked should implement an interface ILockable: public interface ILockable { Guid LockID { get; } } This LockID will be the identity of a "Lock" object which holds the information about which session is locking it. Now, if this were simple pessimistic locking I'd be able to achieve it very simply (using an incrementing version number on Lock to identify update conflicts), but what I actually need is reader-writer pessimistic offline locking. The reason is that some parts of the application will perform actions that read these complex structures, including things like reading a single structure to clone it, or reading multiple structures in order to create a binary file to "publish" the data to an external source. Read locks will be held for a very short period of time, typically less than a second, although in some circumstances they could be held for about 5 seconds at a guess. Write locks will mostly be held for a long time, as they are mostly held by humans. There is a high probability of two users trying to edit the same aggregate at the same time, and a high probability of many users needing to temporarily read-lock at the same time too. I'm looking for suggestions as to how I might implement this. One additional point to make is that if I want to place a write lock and there are some read locks, I would like to "queue" the write lock so that no new read locks are placed. If the read locks are removed within X seconds then the write lock is obtained; if not, the write lock backs off. No new read locks would be placed while a write lock is queued. So far I have this idea: the Lock object will have a version number (int) so I can detect multi-update conflicts, reload, and try again; it will have a string[] for read locks; a string to hold the session ID that has a write lock; a string to hold the queued write lock; and possibly a recursion counter to allow the same session to lock multiple times (for both read and write locks), but I'm not sure about that yet. Rules: Can't place a read lock if there is a write lock or a queued write lock. Can't place a write lock if there is a write lock or a queued write lock. If there are no locks at all then a write lock may be placed. If there are read locks then a write lock will be queued instead of a full write lock being placed (if after X time the read locks are not gone the lock backs off, otherwise it is upgraded). Can't queue a write lock for a session that has a read lock. Can anyone see any problems? Suggest alternatives? Anything? I'd appreciate feedback before deciding on what approach to take.
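
    The rules above reduce to a small state machine on the Lock record that can be unit-tested apart from persistence. A rough Java sketch of just that decision logic follows (the original context is C#/Silverlight; the field and method names are mine, and the version check, timeouts and storage are deliberately left out):

        import java.util.HashSet;
        import java.util.Set;

        /** Decision logic only: who may acquire what, given the current lock record. */
        final class OfflineRwLock {
            private final Set<String> readSessions = new HashSet<>();
            private String writeSession;        // session currently holding the write lock
            private String queuedWriteSession;  // session waiting for readers to drain

            synchronized boolean tryReadLock(String session) {
                // No new read locks while a write lock is held or queued.
                if (writeSession != null || queuedWriteSession != null) return false;
                readSessions.add(session);
                return true;
            }

            synchronized boolean tryWriteLock(String session) {
                if (writeSession != null || queuedWriteSession != null) return false;
                // Cannot queue a write lock for a session that already holds a read lock.
                if (readSessions.contains(session)) return false;
                if (readSessions.isEmpty()) {
                    writeSession = session;          // no readers: grant immediately
                } else {
                    queuedWriteSession = session;    // readers present: queue and wait
                }
                return true;
            }

            synchronized void releaseRead(String session) {
                readSessions.remove(session);
                // Promote a queued writer once the last reader is gone.
                if (readSessions.isEmpty() && queuedWriteSession != null) {
                    writeSession = queuedWriteSession;
                    queuedWriteSession = null;
                }
            }

            synchronized void releaseWrite(String session) {
                if (session.equals(writeSession)) writeSession = null;
            }
        }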

    Read the article

  • Algorithm for trajectory analysis

    - by Arman
    Hello, I would like to analyse trajectory data based on given templates. I need to stack the similar trajectories together. The data is a set of coordinates (xy, xy, xy) and the templates are, again, lines defined by a set of control points. I don't know which direction to go in; maybe neural networks or pattern recognition? Could you please recommend a page, book or library to start with? Kind regards, Arman. PS. Is this the right place to ask the question?

    Read the article

  • Converting an old WCF service to a RIA one

    - by Artur
    Hi there, currently I have a service that looks like: Some app <-- WCF Service <-- Business Logic <-- Entity Framework Model <-- SQL Database. One of the "some app" clients would be Silverlight, but there will be lots of other clients as well (mainly mobile devices). To me the greatest benefit of having RIA Services is the possibility of making ordinary (not asynchronous) calls from Silverlight. I wondered if there is an easy way of converting what I have so far into a RIA service. I also wonder whether there is a point in doing so if I plan to use the same service for multiple platforms/clients. Any help/links would be greatly appreciated.

    Read the article

  • should this database table be normalized?

    - by oo
    I have taken over a database that stores fitness information, and we were having a debate about a certain table and whether it should stay as one table or get broken up into three tables. Today, there is one table called workouts that has the following fields: id, exercise_id, reps, weight, date, person_id. So if I did 2 sets of 3 different exercises on one day, I would have 6 records in that table for that day. For example:

        id, exercise_id, reps, weight, date, person_id
        1, 1, 10, 100, 1/1/2010, 10
        2, 1, 10, 100, 1/1/2010, 10
        3, 1, 10, 100, 1/1/2010, 10
        4, 2, 10, 100, 1/1/2010, 10
        5, 2, 10, 100, 1/1/2010, 10
        6, 2, 10, 100, 1/1/2010, 10

    So the question is: given that there is some redundant data (date, person_id, exercise_id) in multiple records, should this be normalized into three tables?

        WorkoutSummary:
        - id
        - date
        - person_id

        WorkoutExercise:
        - id
        - workout_id (foreign key into WorkoutSummary)
        - exercise_id

        WorkoutSets:
        - id
        - workout_exercise_id (foreign key into WorkoutExercise)
        - reps
        - weight

    I would guess the downside is that queries would be slower after this refactoring, as we would now need to join 3 tables to do the same query that had no joins before. The benefit of the refactoring is that it allows us in the future to add new fields at the workout summary level or the exercise level without adding more duplication. Any feedback on this debate?

    Read the article

  • When is lazy evaluation not useful?

    - by Cherian
    Delayed execution is almost always a boon. But then there are cases when it's a problem and you resort to "fetch" (in NHibernate) to eager-fetch it. Do you know of practical situations where lazy evaluation can bite you back?

    Read the article

  • How to do inclusive range queries when only half-open range is supported (ala SortedMap.subMap)

    - by polygenelubricants
    On SortedMap.subMap

    This is the API for SortedMap<K,V>.subMap:

        SortedMap<K,V> subMap(K fromKey, K toKey) : Returns a view of the portion of this map whose keys range from fromKey, inclusive, to toKey, exclusive.

    This inclusive lower bound, exclusive upper bound combo ("half-open range") is something that is prevalent in Java, and while it does have its benefits, it also has its quirks, as we shall soon see. The following snippet illustrates a simple usage of subMap:

        static <K,V> SortedMap<K,V> someSortOfSortedMap() {
            return Collections.synchronizedSortedMap(new TreeMap<K,V>());
        }
        //...
        SortedMap<Integer,String> map = someSortOfSortedMap();
        map.put(1, "One");
        map.put(3, "Three");
        map.put(5, "Five");
        map.put(7, "Seven");
        map.put(9, "Nine");
        System.out.println(map.subMap(0, 4));  // prints "{1=One, 3=Three}"
        System.out.println(map.subMap(3, 7));  // prints "{3=Three, 5=Five}"

    The last line is important: 7=Seven is excluded, due to the exclusive upper bound nature of subMap. Now suppose that we actually need an inclusive upper bound; then we could try to write a utility method like this:

        static <V> SortedMap<Integer,V> subMapInclusive(SortedMap<Integer,V> map, int from, int to) {
            return (to == Integer.MAX_VALUE) ? map.tailMap(from) : map.subMap(from, to + 1);
        }

    Then, continuing on with the above snippet, we get:

        System.out.println(subMapInclusive(map, 3, 7));  // prints "{3=Three, 5=Five, 7=Seven}"
        map.put(Integer.MAX_VALUE, "Infinity");
        System.out.println(subMapInclusive(map, 5, Integer.MAX_VALUE));
        // {5=Five, 7=Seven, 9=Nine, 2147483647=Infinity}

    A couple of key observations need to be made:
    - The good news is that we don't care about the type of the values, but... subMapInclusive assumes Integer keys for to + 1 to work. A generic version that also takes e.g. Long keys is not possible (see related questions), not to mention that for Long we need to compare against Long.MAX_VALUE instead.
    - Overloads for the numeric primitive boxed types Byte, Character, etc., as keys must all be written individually.
    - A special check needs to be made for toInclusive == Integer.MAX_VALUE, because +1 would overflow, and subMap would throw IllegalArgumentException: fromKey > toKey.
    - This, generally speaking, is an overly ugly and overly specific solution. What about String keys? Or some unknown type that may not even be Comparable<?>?

    So the question is: is it possible to write a general subMapInclusive method that takes a SortedMap<K,V>, a K fromKey and a K toKey, and performs inclusive-range subMap queries?

    Related questions:
    - Are upper bounds of indexed ranges always assumed to be exclusive?
    - Is it possible to write a generic +1 method for numeric box types in Java?

    On NavigableMap

    It should be mentioned that there's a NavigableMap.subMap overload that takes two additional boolean variables to signify whether the bounds are inclusive or exclusive. Had this been made available in SortedMap, then none of the above would've even been asked. So working with a NavigableMap<K,V> for inclusive range queries would've been ideal, but while Collections provides utility methods for SortedMap (among other things), we aren't afforded the same luxury with NavigableMap.

    Related questions:
    - Writing a synchronized thread-safety wrapper for NavigableMap

    On API providing only exclusive upper bound range queries

    Does this highlight a problem with exclusive upper bound range queries? How were inclusive range queries done in the past when an exclusive upper bound was the only available functionality?
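
    For what it's worth, the NavigableMap route mentioned above is a one-liner when synchronization is not needed, since TreeMap implements NavigableMap directly; a small illustration (variable names are mine):

        import java.util.NavigableMap;
        import java.util.TreeMap;

        public class InclusiveSubMapDemo {
            public static void main(String[] args) {
                NavigableMap<Integer, String> map = new TreeMap<>();
                map.put(1, "One");
                map.put(3, "Three");
                map.put(5, "Five");
                map.put(7, "Seven");
                map.put(9, "Nine");

                // Both bounds inclusive: fromInclusive = true, toInclusive = true.
                System.out.println(map.subMap(3, true, 7, true));   // {3=Three, 5=Five, 7=Seven}

                // Works for the "up to MAX_VALUE" case too, with no +1 overflow trick.
                map.put(Integer.MAX_VALUE, "Infinity");
                System.out.println(map.subMap(5, true, Integer.MAX_VALUE, true));
                // {5=Five, 7=Seven, 9=Nine, 2147483647=Infinity}
            }
        }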

    Read the article

  • Managing Cisco programmatically; Telnet vs SNMP?

    - by MikeHerrera
    I was recently approached by a network engineer co-worker who would like to offload his minor network admin duties to a junior-level helpdesk tech. The specific location in need of management acts as an ISP for tenants on its single-site property, so there are a lot of small adjustments being made on a daily basis. I am thinking it would be helpful to write him a WinForms app to manage the 32 Cisco devices on site. I'd like to initially provide functionality to modify access control lists, port VLAN assignments, and bandwidth limitations per VLAN, adding more to the list as it's deemed valuable. My initial thought was to emulate a telnet session with the network device, utilizing my network engineer's familiarity with the command-line / IOS interaction. Minimal time would be required for me to learn Cisco IOS conventions. Though while searching for solutions, it appears that most people favor SNMP, or their specific circumstances pushed them in the direction of SNMP. I wanted to know if I've overlooked an obvious benefit of SNMP. Should I be using SNMP? Why or why not?

    Read the article

  • Validation without ServiceLocator

    - by Dmitriy Nagirnyak
    Hi, I keep coming back to this question, thinking about the best way to perform validation on POCO objects that need access to some context (ISession in NHibernate, or an IRepository, for example). The only option I can still see is to use a Service Locator, so my validation would look like:

        public class User : ICanValidate
        {
            public User() {} // We need this constructor (so no context known)

            public virtual string Username { get; set; }

            public IEnumerable<ValidationError> Validate()
            {
                if (ServiceLocator.GetService<IUserRepository>().FindUserByUsername(Username) != null)
                    yield return new ValidationError("Username", "User already exists.");
            }
        }

    I already use Inversion of Control and Dependency Injection, and I really don't like the Service Locator, due to a number of facts: it is harder to maintain implicit dependencies; it is harder to test the code; there are potential threading issues; the only explicit dependency is on the ServiceLocator itself; the code becomes harder to understand; and I need to register the ServiceLocator interfaces during testing. But on the other side, with plain POCO objects, I do not see any other way of performing the validation above without a ServiceLocator, using only IoC/DI. So the question is: is there any way to use DI/IoC for the situation described above? Thanks, Dmitriy.
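
    One common alternative, sketched here in Java rather than C# and not taken from the question, is to leave the entity as a plain POCO and move the context-dependent rule into a validator whose dependencies arrive through constructor injection; the container wires it up, and tests can pass in a fake repository. All type and member names below are illustrative only.

        import java.util.ArrayList;
        import java.util.List;

        interface UserRepository {
            User findUserByUsername(String username);
        }

        class User {
            private String username;
            String getUsername() { return username; }
            void setUsername(String username) { this.username = username; }
        }

        class ValidationError {
            final String property;
            final String message;
            ValidationError(String property, String message) { this.property = property; this.message = message; }
        }

        /** Context-aware rules live here; the entity itself stays dependency-free. */
        class UserValidator {
            private final UserRepository repository;   // injected by the IoC container

            UserValidator(UserRepository repository) { this.repository = repository; }

            List<ValidationError> validate(User user) {
                List<ValidationError> errors = new ArrayList<>();
                if (repository.findUserByUsername(user.getUsername()) != null) {
                    errors.add(new ValidationError("Username", "User already exists."));
                }
                return errors;
            }
        }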

    Read the article

  • Optimize MySQL table?

    - by fabien-barbier
    Here is my actual table schema (I'm using MySQL). Table experiment: code (int), sample_1_id, sample_2_id, ... up to ... sample_12_id, rna_1_id, rna_2_id, ... up to ... rna_12_id, experiment_start. How can I optimize both parts, sample_n_id and rna_n_id (all are bigint(20) and allow null=true)? About the values: we can have, for example, sample_1_id = 2, sample_2_id = 5, ... Note: values can be updated. Ideas? Thanks.

    Read the article
