Search Results

Search found 31038 results on 1242 pages for 'michael best'.

Page 201/1242

  • Best canvas for drawing in wxPython?

    - by Pablo Rodriguez
    I have to draw a graph of elements composing a topological model of a physical network. There would be nodes and arcs, and the latter could be unidirectional or bidirectional. I would like to capture click events for the nodes and the arcs (to select an element and show its properties somewhere), and drag events for the nodes (to move them around) and the arcs (to connect or disconnect elements). I've done some research and narrowed the alternatives down to OGL (Object Graphics Library) and FloatCanvas. I would rather not drop down to the DrawingContext, but I haven't ruled it out if necessary. Which canvas option would you choose?
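
    For reference, a rough FloatCanvas sketch of per-object click binding; the API details here are written from memory, so treat the exact signatures as assumptions and check the wxPython demo:

        import wx
        from wx.lib.floatcanvas import NavCanvas, FloatCanvas

        class GraphFrame(wx.Frame):
            def __init__(self):
                super(GraphFrame, self).__init__(None, title="Network graph")
                # NavCanvas wraps a FloatCanvas with a pan/zoom toolbar
                self.canvas = NavCanvas.NavCanvas(self).Canvas
                # A node: a filled circle that can be clicked
                node = self.canvas.AddCircle((0, 0), 10, FillColor="Blue")
                node.Bind(FloatCanvas.EVT_FC_LEFT_DOWN, self.on_node_click)
                # An arc: a line between two node positions
                self.canvas.AddLine([(0, 0), (50, 30)], LineWidth=2)
                self.canvas.ZoomToBB()

            def on_node_click(self, obj):
                # FloatCanvas passes the clicked DrawObject to the handler
                print("node clicked:", obj)

        if __name__ == "__main__":
            app = wx.App(False)
            GraphFrame().Show()
            app.MainLoop()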

    Read the article

  • Best wrapper for simultaneous API requests?

    - by bluebit
    I am looking for the easiest, simplest way to access web APIs that return either JSON or XML, with concurrent requests. For example, I would like to call the Twitter search API and return 5 pages of results at the same time (5 requests). The results should ideally be integrated and returned in one array of hashes. I have about 15 APIs that I will be using, and I already have code to access them individually (using a simple Net::HTTP request) and parse them, but I need to make these requests concurrent in the easiest way possible. Additionally, any error handling for JSON/XML parsing is a bonus.
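
    Whatever wrapper is chosen, the underlying pattern is the same: fan the requests out on a small worker pool and collect the parsed results into one collection. A minimal Python sketch of that pattern (the URL and page parameter are purely illustrative):

        import json
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        SEARCH_URL = "https://example.com/search?q=foo&page={}"  # placeholder endpoint

        def fetch_page(page):
            # One request; errors are caught so a single bad page doesn't kill the batch
            try:
                with urlopen(SEARCH_URL.format(page), timeout=10) as resp:
                    return json.load(resp)
            except Exception as exc:
                return {"page": page, "error": str(exc)}

        with ThreadPoolExecutor(max_workers=5) as pool:
            results = list(pool.map(fetch_page, range(1, 6)))  # 5 pages in parallel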

    Read the article

  • Choose the best class if 2 classes have the same P(c|d) (naive Bayes)

    - by ryandi
    Hello, I have some questions about the naive Bayes classifier. In my project I have to classify a text into one of 4 available classes. In naive Bayes we have a formula like cmap = argmax_c P(d|c)·P(c). I have standardized the number of training documents for each class, so I get the same P(c) value for each class (0.25). Here are my questions: What if a test document doesn't contain any token that belongs to any of those 4 classes (in the training documents)? That results in all of the classes having the same value of P(d|c)·P(c). Which class should I pick? And what if the tokens do exist, but 2 or more classes end up with the same value of P(d|c)·P(c)? What should I do? Thank you.
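
    For what it is worth, the standard way to soften both situations is add-one (Laplace) smoothing, which gives unseen tokens a small class-dependent probability instead of zero, so exact ties become rare; any remaining tie is usually broken deterministically (largest prior, or a fixed class order). A minimal Python sketch with made-up counts:

        import math

        def class_score(tokens, token_counts, total_tokens, vocab_size, prior):
            # log P(c) + sum of log P(t|c), with add-one (Laplace) smoothing so that
            # tokens never seen in class c still get a small nonzero probability
            score = math.log(prior)
            for t in tokens:
                score += math.log((token_counts.get(t, 0) + 1) / (total_tokens + vocab_size))
            return score

        # toy training counts for two of the four classes
        counts = {"sport": {"goal": 8, "match": 5}, "tech": {"cpu": 7, "match": 1}}
        totals = {c: sum(tc.values()) for c, tc in counts.items()}
        vocab = {t for tc in counts.values() for t in tc}

        doc = ["goal", "word_never_seen"]
        scores = {c: class_score(doc, counts[c], totals[c], len(vocab), 0.25) for c in counts}
        best = max(sorted(scores), key=scores.get)  # sorted() makes tie-breaking deterministic
        print(best, scores)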

    Read the article

  • Best datastructure for frequently queried list of objects

    - by panzerschreck
    Hello, I have a list of objects, say List<Entity>. The Entity class has an equals method on a few attributes (business rule) to differentiate one Entity object from another. The task that we usually carry out on this list is to remove all the duplicates, something like this:

        List<Entity> noDuplicates = new ArrayList<Entity>();
        for (Entity entity : lstEntities) {
            int indexOf = noDuplicates.indexOf(entity);
            if (indexOf >= 0) {
                noDuplicates.get(indexOf).merge(entity);
            } else {
                noDuplicates.add(entity);
            }
        }

    Now, the problem I have been observing is that this part of the code slows down considerably as soon as the list has more than 10,000 objects. I understand ArrayList is doing an O(n) search. Is there a faster alternative? Using a HashMap is not an option, because the entity's uniqueness is built upon 4 of its attributes together; it would be tedious to put the key itself into the map. Will a sorted set help in faster querying? Thanks
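
    The usual answer is to key a map on a composite made of exactly those 4 attributes, which turns the O(n) indexOf scan into an O(1) lookup (in Java, a small key class with equals/hashCode over those 4 fields, used as the HashMap key). The same idea sketched in Python, with the attribute names invented for illustration:

        def merge_duplicates(entities):
            # key on the 4 attributes that define uniqueness (hypothetical names)
            merged = {}
            for e in entities:
                key = (e.region, e.product, e.channel, e.period)
                if key in merged:
                    merged[key].merge(e)   # same merge-on-collision behaviour as the loop above
                else:
                    merged[key] = e
            return list(merged.values())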

    Read the article

  • what is the best way of giving the feedback to the user

    - by Nubkadiya
    I'm using speech recognition triggered by pressing a button in my application. I want to show the users that, when they click the button, they should speak. I was thinking about using a progress bar, but I don't think it's a good idea. Then I thought about putting up a label saying what's going on. Can someone suggest any more options, please?

    Read the article

  • Best way to implement a List(Of) with a maximum number of items

    - by Ben
    I'm trying to figure out a good way of implementing a List(Of) that holds a maximum number of records. E.g. I have a List(Of Int32) that is being populated with a new Int32 item every 2 seconds, and I want to store only the most recent 2000 items. How can I make the list hold a maximum of 2000 items, so that when the 2001st item is about to be added, the list first drops its oldest item (bringing the count down to 1999) and then adds the new one? The thing is, I need to make sure I'm dropping only the oldest item and adding a new item into the list. Ben
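
    In Python the standard library has this behaviour built in: a deque constructed with maxlen silently discards the item at the opposite end once the bound is reached. A small sketch (the .NET equivalent would be a thin wrapper around Queue(Of T) that dequeues the oldest entry before enqueuing once the count hits the limit):

        from collections import deque

        recent = deque(maxlen=2000)    # bounded: the oldest item is dropped automatically

        for value in range(2500):      # simulate 2500 incoming readings
            recent.append(value)

        print(len(recent))             # 2000
        print(recent[0], recent[-1])   # 500 2499 -> only the newest 2000 survive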

    Read the article

  • Best means to store data locally when offline

    - by mickartz
    I am in the midst of writing a small program (more to experiment with VS 2010 than anything else). Despite being an experiment, it has some practical use for our local athletics club. My thought was to:

    - access the DB (currently online) to download the current members and store them locally on a laptop (this is an MS SQL table, used to power the club's website)
    - take the laptop to the event (yes, there ARE places that don't have internet coverage)
    - add members to that day's race (also a row from a SQL table, though no changes would be made to this)
    - record results (new records in a 3rd table)

    Once home, showered and within internet access again, upload/edit the tables as per the race results/member changes etc. So I was thinking I'd do something like write XML files locally with the data, including a field to indicate changes etc. If anyone can point me in a direction I would appreciate it... hell, if anyone could tell me if this has a name, I'd appreciate it.
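
    (This approach is usually called an offline cache with change tracking, or store-and-forward sync.) A minimal sketch of the change-flag idea using Python's standard library, with the record shape invented purely for illustration:

        import xml.etree.ElementTree as ET

        def save_members(members, path="members.xml"):
            # each record carries a 'changed' flag so the sync step knows what to push back
            root = ET.Element("members")
            for m in members:
                ET.SubElement(root, "member", id=str(m["id"]),
                              name=m["name"], changed=str(m.get("changed", False)))
            ET.ElementTree(root).write(path)

        def changed_members(path="members.xml"):
            # read back only the rows that need uploading once connectivity returns
            return [m.attrib for m in ET.parse(path).getroot()
                    if m.attrib.get("changed") == "True"]

        save_members([{"id": 1, "name": "Jo", "changed": True},
                      {"id": 2, "name": "Sam"}])
        print(changed_members())   # only the record flagged for upload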

    Read the article

  • What is the best DBMS for the job?

    - by Evernoob
    Just had a discussion at work about the merits of using PostgreSQL over MySQL and vice versa. Does anyone have any practical experience where there is a valid reason to use one over the other? Some people were saying that Postgres is better for security purposes whereas MySQL is becoming more feature-rich... I'm not sure what to make of it.

    Read the article

  • Best Functional Approach

    - by dbyrne
    I have some mutable Scala code that I am trying to rewrite in a more functional style. It is a fairly intricate piece of code, so I am trying to refactor it in pieces. My first thought was this:

        def iterate(count: Int, d: MyComplexType) = {
          // Generate next value n
          // Process n, causing some side effects
          return iterate(count - 1, n)
        }

    This didn't seem functional at all to me, since I still have side effects mixed throughout my code. My second thought was this:

        def generateStream(d: MyComplexType): Stream[MyComplexType] = {
          // Generate next value n
          return Stream.cons(n, generateStream(n))
        }

        for (n <- generateStream(initialValue).take(2000000)) {
          // Process n, causing some side effects
        }

    This seemed like a better solution to me, because at least I've isolated my functional value-generation code from the mutable value-processing code. However, it is much less memory efficient, because I am generating a large list that I don't really need to store. This leaves me with 3 choices:

    1. Write a tail-recursive function, bite the bullet and refactor the value-processing code
    2. Use a lazy list. This is not a memory-sensitive app (although it is performance sensitive)
    3. Come up with a new approach

    I guess what I really want is a lazily evaluated sequence where I can discard the values after I've processed them. Any suggestions?
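
    A lazily evaluated sequence whose values are not retained after processing is exactly what an iterator/generator gives you: in Scala an Iterator rather than a Stream (a Stream memoizes every cell it has produced, which is where the memory goes). The same concept in a short Python sketch:

        def generate(value):
            # lazily yield successive values; nothing already consumed is retained
            while True:
                value = value + 1           # stand-in for "generate next value n"
                yield value

        processed = 0
        for n in generate(0):
            processed += 1                  # stand-in for the side-effecting processing
            if processed == 2000000:        # constant memory, no matter how many items
                break
        print(processed)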

    Read the article

  • Best way to associate data files with particular tests in RSpec / Ruby

    - by Bill T
    For my RSpec tests I would like to automatically associate data files with each test. To clarify: if my tests each require an XML file as input data, and then some XPath statements to validate the responses they get back, I would like to externalize the XML and XPath as files and have the testing framework easily associate them with the particular test being run, by using the unique ID of the test as the file(s) name. I tried to get this behavior, but the solution isn't very clean. I wrote a helper method that takes the value of "description" and combines it with __FILE__ to create a unique identifier, which is set into a global variable that other utilities can access. The unique identifier is used to associate the data files I need. I have to call this helper method as the first line of every test, which is ugly. I have an RSpec example that looks like this:

        describe "Basic functions of this server I'm testing" do
          it "should give me back a response" do
            # Sets a global var to: "my_tests_spec.rb_should_give_me_back_a_response"
            TestHelper::who_am_i __FILE__, description
            ...
          end
        end

    Is there some better/cleaner/slicker way I can get a unique ID for each test that I could use to associate data files with? Perhaps something built into RSpec I'm unaware of? Thank you, -Bill

    Read the article

  • Best indexing strategy for several varchar columns in Postgres

    - by Corey
    I have a table with 10 columns that need to be searchable (the table itself has about 20 columns). The user will enter query criteria for at least one of the columns, but possibly all ten; all non-empty criteria are then combined into an AND condition. Suppose the user provided non-empty criteria for column1, column4 and column8; the query would be:

        select * from the_table
        where column1 like '%column1_query%'
          and column4 like '%column4_query%'
          and column8 like '%column8_query%'

    So my question is: am I better off creating 1 index with 10 columns? 10 indexes with 1 column each? Or do I need to find out which sets of columns are queried together frequently and create indexes for them (an index on columns 1, 4 and 8 in the case above)? If my understanding is correct, a single index on 10 columns would only work effectively if all 10 columns are in the condition. Open to any suggestions here; additionally, the rowcount of the table is only expected to be around 20-30K rows, but I want to make sure any and all searches on the table are fast. Thanks!

    Read the article

  • what is the best way to optimize my json on an asp.net-mvc site

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site. We have a pretty slow network (internal application) and the grid seems to be taking a long time to load (the issue is both the network and the parsing/rendering). I am trying to figure out how to minimize what I send over to the client to make it as fast as possible. Here is a simplified view of my controller action that loads data into the grid:

        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GridData1(GridData args)
        {
            var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new
            {
                i.Id,
                Name = "<div class='showDescription' id= '" + i.id + "'>" + i.Name + "</div>",
                MyValue = GetImageUrl(_map, i.value, "star"),
                ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>", Url.Action("Link", "Order", new { id = i.id }), i.Id),
                i.Target,
                i.Owner,
                EndDate = i.EndDate,
                Updated = "<div class='showView' aitId= '" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>",
            });
            return Json(paginatedData);
        }

    So I am building up JSON data (I have about 200 records of the above) and sending it back to the GUI to put in the jqGrid. The one thing I can think of is repeated data: in some of the JSON fields I am appending HTML on top of the raw data, and it is the same HTML on every record. It seems like it would be more efficient if I could just send the data and "append" the HTML around it on the client side. Is this possible? Then I would just be sending the actual data over the wire and have the client side put together the rest of the HTML tags (the divs, etc.). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point these solutions will increase the client-side load, but it may be worth it to cut down on network traffic.

    Read the article

  • Excel - Best Way to Connect With Access Data

    - by gamerzfuse
    Hello there, here is the situation we have: a) I have an Access database / application that records a significant amount of data. Significant fields would be hours, # of sales, # of unreturned calls, etc. b) I have an Excel document that connects to the Access database and pulls data in to visualize it. As it stands now, the Excel file has a Refresh button that loads new data. The data is loaded into a large PivotTable. The main 'visual form' then uses VLOOKUP to get the results from the form, based on the related hours. This operation is slow (~10 seconds) and seems to be redundant and inefficient. Is there a better way to do this? I am willing to go just about any route - just need directions. Thanks in advance! Update: I have confirmed (thanks to helpful comments/responses) that the problem is with the data loading itself; removing all the VLOOKUPs only shaved a second or two off the load time. So the question stands: how can I rapidly and reliably get the data without so much time involved (it loads around 3000 records into the PivotTables)?

    Read the article

  • Best Tools for Software Maintenance Engineering

    - by Pev
    Yes, the dreaded 'M' word. You've got a workstation, source control and half a million lines of source code that you didn't write. The documentation was out of date the moment it was approved and published. The original developers are LTAO, at the next project/startup/loony bin and not answering email. What are you going to do? {favourite editor} and grep will get you started on your spelunking through the gnarly guts of the code base, but what other tools should be in the maintenance engineer's toolbox? To start the ball rolling: I don't think I could live without Source Insight for C/C++ spelunking. (DISCLAIMER: I don't work for 'em.)

    Read the article

  • Best way to correct garbled data caused by false encoding

    - by ercan
    Hi all, I have a set of data that contains garbled text fields because of encoding errors during many imports/exports from one database to another. Most of the errors were caused by converting UTF-8 to ISO-8859-1. Strangely enough, the errors are not consistent: the word 'München' appears as 'MÃ¼nchen' in some places and as 'MÜnchen' in others. Is there a trick in SQL Server to correct this kind of crap? The first thing that I can think of is to exploit the COLLATE clause, so that 'Ã¼' is interpreted as 'ü', but I don't exactly know how. If it isn't possible to do it at the DB level, do you know of any tool that helps with a bulk correction? (Not a manual find/replace tool, but a tool that somehow guesses the garbled text and corrects it.)
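
    Outside the database, the classic fix for text that was garbled exactly once by the UTF-8-read-as-Latin-1 round trip is to reverse that round trip. A small Python sketch of the idea (rows that were not garbled, or were garbled more than once, simply will not survive the reversal and are left alone):

        def fix_mojibake(text):
            # Re-encode the garbled string as Latin-1 to recover the original UTF-8
            # bytes, then decode those bytes properly; fall back to the input on failure.
            try:
                return text.encode("latin-1").decode("utf-8")
            except (UnicodeEncodeError, UnicodeDecodeError):
                return text

        print(fix_mojibake("M\u00c3\u00bcnchen"))   # 'MÃ¼nchen' -> 'München'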

    Read the article

  • python list/dict property best practice

    - by jterrace
    I have a class object that stores some properties that are lists of other objects. Each of the items in the list has an identifier that can be accessed with the id property. I'd like to be able to read and write from these lists but also be able to access a dictionary keyed by their identifier. Let me illustrate with an example:

        class Child(object):
            def __init__(self, id, name):
                self.id = id
                self.name = name

        class Teacher(object):
            def __init__(self, id, name):
                self.id = id
                self.name = name

        class Classroom(object):
            def __init__(self, children, teachers):
                self.children = children
                self.teachers = teachers

        classroom = Classroom([Child('389', 'pete')], [Teacher('829', 'bob')])

    This is a silly example, but it illustrates what I'm trying to do. I'd like to be able to interact with the classroom object like this:

        # access like a list
        print classroom.children[0]
        # append like it's a list
        classroom.children.append(Child('2344', 'joe'))
        # delete from like it's a list
        classroom.children.pop(0)

    But I'd also like to be able to access it like it's a dictionary, and the dictionary should be automatically updated when I modify the list:

        # access like a dict
        print classroom.childrenById['389']

    I realize I could just make it a dict, but I want to avoid code like this:

        classroom.childrendict[child.id] = child

    I also might have several of these properties, so I don't want to add functions like addChild, which feels very un-pythonic anyway. Is there a way to somehow subclass dict and/or list and provide all of these functions easily with my class's properties? I'd also like to avoid as much code as possible.
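
    One way to get both views with very little per-property code is a small list subclass that exposes a by-id dictionary view; a minimal sketch (here the dict view is rebuilt on access, which keeps it trivially in sync and is cheap for modest list sizes):

        class IndexedList(list):
            # A list whose items (each having an .id attribute) are also reachable by id.

            @property
            def by_id(self):
                # rebuilt on every access, so it can never go stale; O(n) per access
                return {item.id: item for item in self}

        class Classroom(object):
            def __init__(self, children, teachers):
                self.children = IndexedList(children)
                self.teachers = IndexedList(teachers)

    With that in place, classroom.children still behaves like a plain list (append, pop, indexing), and classroom.children.by_id['389'] gives the dict-style access; if the O(n) rebuild ever matters, the subclass can instead override append/pop to maintain the index incrementally.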

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently re-did an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it. I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key, so it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leak in the first place. It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that they can run the few times a year they need to access the restricted data, so the data would be decrypted on the client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require re-writing the web application and creating a new fat client application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation. Do you have a better suggestion? Which method would you recommend? More importantly, why?
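
    The shape of that design, where the web tier holds only the public key and decryption is physically impossible on the server, looks roughly like this. This is a sketch using the Python 'cryptography' package rather than the ASP.NET stack, and note that RSA-OAEP only handles small payloads, so whole records would normally get hybrid encryption (a random symmetric key per record, itself RSA-encrypted):

        from cryptography.hazmat.primitives.asymmetric import rsa, padding
        from cryptography.hazmat.primitives import hashes

        # Generated once; only the public half is deployed with the web application,
        # the private half stays offline with the fat client.
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)

        # Web application: can encrypt newly captured fields but can never decrypt.
        ciphertext = public_key.encrypt(b"123-45-6789", oaep)

        # Offline fat client: the only place decryption is possible.
        print(private_key.decrypt(ciphertext, oaep).decode())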

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes are especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
    - each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving stuff out which it would be nice to include, the contents of users' home directories for example. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is, how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add to it incrementally, and rsync the results to the remote box. But others probably have better suggestions.

    Read the article

  • What is the best way to embed controls in a list/grid

    - by Brad
    I have a table of about 450 rows that I would like to display in a graphical list for users to view or modify the line items. The users would be selecting options from comboboxes and ticking checkboxes, etc. I have found a ListView class that extends the basic ListView to allow embedding objects, but it seems kind of sluggish when I load all the rows into it. I have used a DataGridView in the past for comboboxes and checkboxes, but a lot of time was invested in getting that up and running... not a big fav of mine. I am looking for suggestions on how I can do this with minimal overhead. Thanks. C#, VS2008, .NET 2.0, System.Windows.Forms

    Read the article

  • Best approach for using Scanner Objects in Java?

    - by devjeetroy
    Although I'm more of a C++/ASM guy, I have to work with Java as part of my undergrad course at college. Our teacher taught us input using Scanner(System.in), and told us that if multiple functions were taking user input, it would be advisable to pass a single Scanner object around so as to reduce the chances of the input stream getting screwed up. Now, using this approach has gotten me into a situation where I'm trying to use Scanner.nextLine(), and the statement does not wait for user input; it just moves on to the following statement. I figured there may be some residual CR/LF or other characters in the Scanner that might not have been retrieved and are causing the problem. Here is the code:

        while (lineScanner.hasNext()) {
            if (isPlaceHolder(temp = lineScanner.next())) {
                temp = temp.replace("<", "");
                temp = temp.replace(">", "");
                System.out.print("Enter " + aOrA(temp.charAt(0)) + " " + temp + " : ");
                temp = consoleInput.nextLine();
            }
            outputFileStream.print(temp + " ");
        }

    All of this code is inside a function which receives the Scanner object consoleInput. What happens when I run it is that the first time the program enters the if block, it carries out the System.out.print, does not wait for user input, and moves on to the second time it enters the 'if' block. This time it takes the input, and the rest of the program operates normally. What is even more surprising is that when I check the output file created by the program, it is perfect, just as I want it to be, almost as if the first input using the scanner was correct. I have solved this problem by creating a new System.in Scanner in the function itself, instead of receiving the Scanner object as a parameter. But I am still very curious to know what the hell is happening and why it couldn't be solved using a simple Scanner.reset(). Would it be better to just create a Scanner object for each function? Thanks, Devjeet. PS: Although I know how to take input using FileInputStreams and the like, we are not supposed to use them for this homework.

    Read the article
