Search Results

Search found 766 results on 31 pages for 'simplicity'.

Page 19/31

  • The "first past the post election" query problem

    - by MPelletier
    This problem may seem like school work, but it isn't. At best it is self-imposed school work. I encourage any teachers to take it as an example if they wish. "First past the post" elections are single-round, meaning that whoever gets the most votes wins; there are no second rounds. Suppose a table for an election:

        CREATE TABLE ElectionResults (
            DistrictHnd   INTEGER       NOT NULL,
            PartyHnd      INTEGER       NOT NULL,
            CandidateName VARCHAR2(100) NOT NULL,
            TotalVotes    INTEGER       NOT NULL,
            PRIMARY KEY (DistrictHnd, PartyHnd));

    The table has two foreign keys: DistrictHnd points to a District table (which lists all the electoral districts) and PartyHnd points to a Party table (which lists all the political parties). I won't bother with the other tables here; joining them is trivial. This is just a wee bit of context. The question: what SQL query will return a table listing the DistrictHnd, PartyHnd, CandidateName and TotalVotes of the winner (max votes) in each district? This does not suppose any particular database system. If you wish to stick to a particular implementation of SQL, go the way of SQLite and MySQL. If you can devise a better schema (or an easier one), that is acceptable too. Criteria: simplicity, portability to other databases.
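
    A minimal sketch of the usual portable approach, a join against per-district maximums, wrapped in Python's sqlite3 so it can be run as-is; the sample rows are invented, and ties come back as multiple rows per district:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE ElectionResults (
                DistrictHnd INTEGER NOT NULL,
                PartyHnd INTEGER NOT NULL,
                CandidateName VARCHAR2(100) NOT NULL,
                TotalVotes INTEGER NOT NULL,
                PRIMARY KEY (DistrictHnd, PartyHnd));
            INSERT INTO ElectionResults VALUES
                (1, 1, 'Alice', 500), (1, 2, 'Bob', 700),
                (2, 1, 'Carol', 900), (2, 2, 'Dave', 300);
        """)

        # A winner is any row whose TotalVotes equals its district's maximum.
        winners = conn.execute("""
            SELECT r.DistrictHnd, r.PartyHnd, r.CandidateName, r.TotalVotes
            FROM ElectionResults r
            JOIN (SELECT DistrictHnd, MAX(TotalVotes) AS MaxVotes
                  FROM ElectionResults GROUP BY DistrictHnd) m
              ON r.DistrictHnd = m.DistrictHnd
             AND r.TotalVotes = m.MaxVotes
        """).fetchall()
        print(winners)  # e.g. [(1, 2, 'Bob', 700), (2, 1, 'Carol', 900)]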

    Read the article

  • Comparing two collections for equality

    - by Crossbrowser
    I would like to compare two collections (in C#), but I'm not sure of the best way to implement this efficiently. I've read the other thread about Enumerable.SequenceEqual, but it's not exactly what I'm looking for. In my case, two collections would be equal if they both contain the same items (no matter the order). Example:

        collection1 = {1, 2, 3, 4};
        collection2 = {2, 4, 1, 3};
        collection1 == collection2; // true

    What I usually do is to loop through each item of one collection and see if it exists in the other collection, then loop through each item of the other collection and see if it exists in the first collection (I start by comparing the lengths):

        if (collection1.Count != collection2.Count)
            return false; // the collections are not equal

        foreach (Item item in collection1)
        {
            if (!collection2.Contains(item))
                return false; // the collections are not equal
        }

        foreach (Item item in collection2)
        {
            if (!collection1.Contains(item))
                return false; // the collections are not equal
        }

        return true; // the collections are equal

    However, this is not entirely correct, and it's probably not the most efficient way to compare two collections for equality. An example I can think of that would be wrong is:

        collection1 = {1, 2, 3, 3, 4}
        collection2 = {1, 2, 2, 3, 4}

    These would be considered equal with my implementation. Should I just count the number of times each item is found and make sure the counts are equal in both collections? The examples are in some sort of C# (let's call it pseudo-C#), but give your answer in whatever language you wish; it does not matter. Note: I used integers in the examples for simplicity, but I want to be able to use reference-type objects too (they do not behave correctly as keys, because only the reference of the object is compared, not the content).
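
    The counting idea at the end is a multiset (bag) comparison. A minimal sketch, in Python since the question invites any language, using collections.Counter to tally items; for reference types this presumes content-based equality and hashing, the very caveat noted above:

        from collections import Counter

        def multiset_equal(a, b):
            """True if a and b contain the same items with the same counts."""
            return Counter(a) == Counter(b)

        print(multiset_equal([1, 2, 3, 4], [2, 4, 1, 3]))        # True
        print(multiset_equal([1, 2, 3, 3, 4], [1, 2, 2, 3, 4]))  # False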

    Read the article

  • Finding values from a table that are *not* in a grouping of another table and what group that value

    - by Bkins
    I hope I am not missing something very simple here. I have done Google searches and searched through Stack Overflow. Here is the situation: for simplicity's sake, let's say I have a table called "PeoplesDocs", in a SQL Server 2008 DB, that holds a bunch of people and all the documents that they own. So one person can have several documents. I also have a table called "RequiredDocs" that simply holds all the documents that a person should have. Here is sort of what it looks like:

        PeoplesDocs:
        PersonID DocID
        -------- -----
        1        A
        1        B
        1        C
        1        D
        2        C
        2        D
        3        A
        3        B
        3        C

        RequiredDocs:
        DocID DocName
        ----- ---------
        A     DocumentA
        B     DocumentB
        C     DocumentC
        D     DocumentD

    How do I write a SQL query that returns some variation of:

        PersonID MissingDocs
        -------- -----------
        2        DocumentA
        2        DocumentB
        3        DocumentD

    I have tried, and most of my searching has pointed to, something like:

        SELECT DocID FROM RequiredDocs
        WHERE DocID NOT IN (SELECT DocID FROM PeoplesDocs)

    but obviously this will not return anything in this example, because every document is owned by at least one person. Also, if a person does not have any documents, there will be one record in the PeoplesDocs table with the DocID set to NULL. Thank you in advance for any help you can provide, Ben
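
    One hedged sketch of such a query: pair every person with every required document, then keep the pairs that have no matching ownership row. The SQL follows the tables above and is wrapped in Python's sqlite3 only so it can be executed as-is:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE PeoplesDocs (PersonID INTEGER, DocID CHAR(1));
            CREATE TABLE RequiredDocs (DocID CHAR(1), DocName VARCHAR(20));
            INSERT INTO PeoplesDocs VALUES (1,'A'),(1,'B'),(1,'C'),(1,'D'),
                                           (2,'C'),(2,'D'),(3,'A'),(3,'B'),(3,'C');
            INSERT INTO RequiredDocs VALUES ('A','DocumentA'),('B','DocumentB'),
                                            ('C','DocumentC'),('D','DocumentD');
        """)

        # Every (person, required doc) pair with no matching row is a missing doc;
        # a person whose only row has a NULL DocID simply misses everything.
        missing = conn.execute("""
            SELECT p.PersonID, r.DocName AS MissingDocs
            FROM (SELECT DISTINCT PersonID FROM PeoplesDocs) p
            CROSS JOIN RequiredDocs r
            LEFT JOIN PeoplesDocs pd
                   ON pd.PersonID = p.PersonID AND pd.DocID = r.DocID
            WHERE pd.DocID IS NULL
            ORDER BY p.PersonID, r.DocID
        """).fetchall()
        print(missing)  # [(2, 'DocumentA'), (2, 'DocumentB'), (3, 'DocumentD')]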

    Read the article

  • In ParallelPython, a method of an object ( object.func() ) fails to manipulate a variable of an object ( object.value )

    - by mehmet.ali.anil
    With Parallel Python, I am trying to convert my old serial code to parallel; it heavily relies on objects with methods that change that object's variables. A stripped example, in which I omit the syntax in favor of simplicity:

        class Network:
            self.adjacency_matrix = [ ... ]
            self.state = [ ... ]
            self.equilibria = [ ... ]
            ...

            def populate_equilibria(self):
                # This method takes every possible value that self.state can be in,
                # runs the boolean dynamical system,
                # and writes an integer into self.equilibria for each self.state.
                # It doesn't return anything.

    I call this method as:

        j1 = jobserver.submit(net2.populate_equilibria, (), (), ("numpy as num",))

    The job is submitted, and I know that a long computation takes place, so I speculate that my code is run. The problem is, I am new to Parallel Python, and I was expecting that, when the method is called, the variable net2.equilibria would be written accordingly, and I would get a revised object (net2). That is how my code works: independent objects with methods that act upon the object's variables. Instead, though the computation is apparent, and reasonably timed, the variable net2.equilibria remains unchanged. It is as if PP only takes the function and the object, computes it elsewhere, but never returns the object, so I am left with the old one. What am I missing? Thanks in advance.
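
    For what it's worth, a worker process mutates its own pickled copy of the object, so in-place changes are lost unless the method returns its result and the caller reassigns it. A runnable sketch of that return-and-reassign pattern using the standard library's multiprocessing (the same pickling rule applies to pp, where something like net2.equilibria = j1() after submit would collect the result, since a pp job's return value is all that comes back from the worker):

        import multiprocessing

        class Network:
            def __init__(self):
                self.state = [0, 1, 2]
                self.equilibria = []

            def populate_equilibria(self):
                # The worker fills in ITS OWN copy of the object...
                self.equilibria = [s * 2 for s in self.state]
                return self.equilibria      # ...so ship the result back explicitly.

        def run(net):
            return net.populate_equilibria()

        if __name__ == "__main__":
            net2 = Network()
            with multiprocessing.Pool(1) as pool:
                # Reassign on the caller's side; net2 itself never leaves this process.
                net2.equilibria = pool.apply(run, (net2,))
            print(net2.equilibria)          # [0, 2, 4]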

    Read the article

  • Updating permissions on Amazon S3 files that were uploaded via JungleDisk

    - by Simon_Weaver
    I am starting to use Jungle Disk to upload files to an Amazon S3 bucket which corresponds to a CloudFront distribution, i.e. I can access it via an http:// URL and I am using Amazon as a CDN. The problem I am facing is that Jungle Disk doesn't set 'read' permissions on the files, so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL, then that URL returns a 200. I really, really like the simplicity of dragging files to a network drive. JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable. I really don't want to have to go to a different tool (especially if I have to buy it) just to change the permissions, and this seems really slow anyway, because such tools generally seem to traverse the whole directory structure. JungleDisk provides some kind of 'web access', but this is a paid feature and I'm not sure whether it will work or not. S3 doesn't appear to propagate permissions down, which is a real pain. I'm considering writing a manual tool to traverse my tree and set everything to 'read', but I'd rather not do this if it is a problem someone else has already solved.
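
    If the manual tool ends up being the route, the traversal is short with Amazon's Python SDK. A sketch assuming the current SDK (boto3), configured credentials, and 'my-bucket' as a stand-in for the real bucket name:

        import boto3

        s3 = boto3.client("s3")
        bucket = "my-bucket"  # hypothetical bucket name

        # Walk every key in the bucket and grant anonymous read access.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                s3.put_object_acl(Bucket=bucket, Key=obj["Key"], ACL="public-read")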

    Read the article

  • C# .NET : Is using the .NET Image Conversion enough?

    - by contactmatt
    I've seen a lot of people try to code their own image conversion techniques. It often seems to be very complicated, and ends up using GDI+ function calls and manipulating the bits of the image. This has got me wondering if I am missing something in the simplicity of .NET's image conversion call when saving an image. Here's the code I have:

        Bitmap tempBmp = new Bitmap(@"c:\temp\img.jpg");
        Bitmap bmp = new Bitmap(tempBmp, 800, 600);
        bmp.Save(@"c:\temp\img.bmp",  // extension depends on format
            ImageFormat.Bmp);         // or Gif, Jpeg, Png, Tiff, Wmf:
                                      // all the ImageFormats I allow conversion to

    This is all I'm doing in my image conversion: just setting the location where the image should be saved, and passing it an ImageFormat type. I've tested it the best I can, but I'm wondering if I am missing anything in this simple format conversion, or if this suffices.

    Read the article

  • List of objects or parallel arrays of properties?

    - by Headcrab
    The question is, basically: what would be more preferable, both performance-wise and design-wise: to have a list of objects of a Python class, or to have several lists of numerical properties? I am writing some sort of scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box, so each ball has a number of numerical properties, like x-y-z coordinates, diameter, mass, velocity vector and so on. How to store the system better? The two major options I can think of (see the sketch after this list) are:

    1. Make a class "Ball" with those properties and some methods, then store a list of objects of the class, e.g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on.
    2. Make an array of numbers for each property, so that for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on.

    To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with a C++ extension?
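
    A sketch of what option 2 buys you in numpy: with one array per property, a whole-system update is a single vectorized statement instead of a Python-level loop over Ball objects (the names are illustrative):

        import numpy as np

        n = 1000  # number of balls

        # One array per property: the "structure of arrays" layout.
        pos = np.zeros((n, 3))           # x, y, z for every ball
        vel = np.random.randn(n, 3)      # velocity vectors
        mass = np.ones(n)

        # One time step for the whole system, with no per-ball loop:
        dt = 0.01
        pos += vel * dt

        # Ball i is just row i:
        x_of_ball_42 = pos[42, 0]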

    Read the article

  • Submit WordPress form password programmatically

    - by songdogtech
    How can I let a user access a WordPress protected page with a URL that will submit the password in the form below? I want to be able to let a user get to a password-protected WordPress page without needing to type the password, so when they go to the page, the password is submitted by a POST URL on page load. This is not intended to be secure in any respect; I'll need to hardcode the password in the URL and the PHP. It's just for simplicity for the user, and once they're in, the cookie will let them in for 10 more days. I will select the particular user with a separate PHP function that determines their IP or WordPress login status. I used Wireshark to find the POST string:

        post_password=mypassword&Submit=Submit

    but using this URL:

        mydomain.com/wp-pass.php?post_password=mypassword&Submit=Submit

    gives me a blank page. This is the form:

        <form action="http://mydomain.com/wp-pass.php" method="post">
        Password: <input name="post_password" type="password" size="20" />
        <input type="submit" name="Submit" value="Submit" /></form>

    This is wp-pass.php:

        <?php
        require( dirname(__FILE__) . '/wp-load.php');
        if ( get_magic_quotes_gpc() )
            $_POST['post_password'] = stripslashes($_POST['post_password']);
        setcookie('wp-postpass_' . COOKIEHASH, $_POST['post_password'], time() + 864000, COOKIEPATH);
        wp_safe_redirect(wp_get_referer());
        ?>

    What am I doing wrong? Or is there a better way to let a user into a password-protected page automatically?

    Read the article

  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or to send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g. as "data" : "<unique identifier>" in the JSON, with the first bytes of the data blob being <unique identifier>?

    CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data, for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data: definitely at least 100 kilobytes, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK (the one I am using during development is here). Thanks
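
    One concrete number for the JSON route: embedding binary in JSON means text-encoding it, and the usual encoding, Base64, inflates the payload by a third before any compression. A quick Python illustration:

        import base64

        blob = bytes(range(256)) * 400      # ~100 KB of binary data
        encoded = base64.b64encode(blob)    # what would be embedded in the JSON

        print(len(blob))     # 102400
        print(len(encoded))  # 136536, about 4/3 the original size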

    Read the article

  • Best practices for Java logging from multiple threads?

    - by Jason S
    I want to have a diagnostic log that is produced by several tasks managing data. These tasks may be in multiple threads. Each task needs to write an element (possibly with subelements) to the log, getting in and out quickly. If this were a single-task situation, I'd use XMLStreamWriter, as it seems like the best match for simplicity/functionality without having to hold a ballooning XML document in memory. But it's not a single-task situation, and I'm not sure how best to make sure this is "threadsafe", where "threadsafe" in this application means that each log element should be written to the log correctly and serially (one after the other, and not interleaved in any way). Any suggestions? I have a vague intuition that the way to go is to use a queue of log elements (with each one able to be produced quickly: my application is busy doing real work that's performance-sensitive), and have a separate thread which handles the log elements and sends them to a file, so the logging doesn't interrupt the producers. The logging doesn't necessarily have to be XML, but I do want it to be structured and machine-readable. Edit: I put "threadsafe" in quotes. Log4j seems to be the obvious choice (new to me but old to the community), so why reinvent the wheel...
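
    That queue intuition is the standard shape for this: producers enqueue finished log elements, and a single consumer thread owns the file, so elements land whole and in order. A minimal sketch of the pattern (in Python for brevity; in Java the same pieces are a java.util.concurrent.BlockingQueue plus a writer thread, which is roughly what Log4j's AsyncAppender packages up):

        import queue
        import threading

        log_queue = queue.Queue()

        def log_writer(path):
            """Single consumer: the only code that ever touches the file."""
            with open(path, "a") as f:
                while True:
                    element = log_queue.get()
                    if element is None:         # shutdown sentinel
                        break
                    f.write(element + "\n")     # one element, written whole

        writer = threading.Thread(target=log_writer, args=("diag.log",), daemon=True)
        writer.start()

        # Any task, on any thread: enqueueing is cheap, so producers get in and out fast.
        log_queue.put("<task id='42'><status>ok</status></task>")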

    Read the article

  • MS Query Analyzer / Management Studio replacement?

    - by kprobst
    I've been using SQL Server since version 6.5 and I've always been a bit amazed at the fact that the tools seem to be targeted at DBAs rather than developers. I liked the simplicity and speed of the Query Analyzer, for example, but hated the built-in editor, which was really no better than a syntax-coloring-capable Notepad. Now that we have Management Studio, the management part seems a bit better, but from a developer standpoint the tools are even worse. Visual Studio's excellent text editor... without a way to customize keyboard bindings!? Don't get me started on how unusable the tree-based management hierarchy is. Why can't I re-root the tree on a list of stored procs, for example, the way the Enterprise Manager used to allow? Now I have a treeview that needs to be scrolled horizontally, which makes it eminently useless. The SQL Server support in Visual Studio is fantastic for working with stored procedures and functions, but it's terrible as a general ad hoc data query tool. I've tried various tools over the years, but invariably they seem to focus on the management side and shortchange the developer in me. I just want something with basic admin capabilities, good keyboard support and the requisite DDL functionality (ideally something like the Query Analyzer). At this point I'm seriously thinking of using vim+sqlcmd and a console... I'm that desperate :) Those of you who work day in and day out with SQL Server and Visual Studio... do you find the tools to be adequate? Have you ever wished they were better, and if you have found something better, could you share? Thanks!

    Read the article

  • Embedded non-relational (nosql) data store

    - by Igor Brejc
    I'm thinking about using/implementing some kind of embedded key-value (or document) store for my Windows desktop application. I want to be able to store various types of data (GPS tracks would be one example) and, of course, be able to query this data. The amount of data would be such that it couldn't all be loaded into memory at the same time. I'm thinking about using sqlite as a storage engine for a key-value store, something like y-serial, but written in .NET. I've also read about FriendFeed's usage of MySQL to store schema-less data, which is a good pointer on how to use an RDBMS for non-relational data. sqlite seems to be a good option because of its simplicity, portability and library size. My question is: are there any other options for an embedded non-relational store? It doesn't need to be distributable and it doesn't have to support transactions, but it does have to be accessible from .NET and it should have a small download size. UPDATE: I've found an article titled "SQLite as a Key-Value Database" which compares sqlite with Berkeley DB, an embedded key-value store library.
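
    The sqlite-as-key-value-store idea is small enough to sketch whole. A minimal document store in Python (y-serial territory; the same two SQL statements port directly to a .NET sqlite wrapper such as System.Data.SQLite):

        import json
        import sqlite3

        class KVStore:
            """A tiny document store on sqlite: one table of keyed JSON blobs."""

            def __init__(self, path):
                self.db = sqlite3.connect(path)
                self.db.execute(
                    "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)")

            def put(self, key, document):
                self.db.execute("REPLACE INTO kv VALUES (?, ?)",
                                (key, json.dumps(document)))
                self.db.commit()

            def get(self, key):
                row = self.db.execute(
                    "SELECT value FROM kv WHERE key = ?", (key,)).fetchone()
                return json.loads(row[0]) if row else None

        store = KVStore("tracks.db")
        store.put("gps/2010-04-01", {"points": [[46.05, 14.51], [46.06, 14.52]]})
        print(store.get("gps/2010-04-01"))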

    Read the article

  • Standardizing a Release/Tools group on a specific language

    - by grahzny
    I'm part of a six-member build and release team for an embedded software company. We also support a lot of developer tools, such as Atlassian's FishEye and Jira, Perforce, Bugzilla, AnthillPro, and a couple of homebrew tools (like my Django release notes generator). Most of the time, our team just writes little plugins for larger apps (e.g. customizing workflows in Anthill), long-term utility scripts (packaging up a release for QA), or things like Perforce triggers (don't let people check into a specific branch unless their change description includes a bug number; authenticate against Active Directory instead of Perforce's internal passwords). That's about the scale of our problems, although we sometimes tackle something slightly more sizable. My boss, who is reasonably technical, has asked us to standardize on one or two languages so we can more easily substitute for each other. He's advocating bash scripts and Perl, due to their universality and simplicity. I can see his point: we mostly do "glue", so why not use "glue" languages rather than saddle ourselves with something designed for much larger projects? Since some of the tools we work with are Java-based, we do need to use something that speaks JVM sometimes. (The path of least resistance for these projects is BeanShell and Groovy.) I feel a tremendous itch toward language advocacy, but I'm trying to avoid saying "We should use Python 'cause I like it and Perl is gross." Instead, I'm trying to come up with a good approach to defining our problem set: What problems do we solve with scripts? Would we benefit from a library of common functions built by our team, or are most of our projects more isolated? What is it reasonable to expect my co-workers to learn? Which languages give us the most ease of development and ease of modification? Can you folks suggest some useful ways to approach this problem, both for my own thinking process and to help me facilitate some brainstorming among my coworkers?

    Read the article

  • Word VBA - Find text between delimiters and convert to lower case

    - by jJack
    I would like to find text which is between the < and > characters, and then turn any found text into "normal" case, where the first letter of each word is capitalized. Here is what I have thus far:

        Function findTextBetweenCarots() As String
            Dim strText As String
            With Selection
                .Find.Text = "<"    ' what about <[^0-9]+> ?
                .Find.Forward = True
                .Find.Wrap = wdFindContinue
            End With
            Selection.Find.Execute
            ' How do I get the text up to the closing ">"?
            findTextBetweenCarots = Application.Selection.Text
        End Function

    Or is there a better way of doing this? I also approached the problem using the VBScript Regex 5.5 library, which worked on simple documents, but not on certain documents with complex tables. For example, trying to just bold the text (for simplicity):

        Sub BoldUpperCaseWords()
            Dim regEx, Match, Matches
            Dim rngRange As Range
            Set regEx = New RegExp
            regEx.Pattern = "<[^0-9]+>"
            regEx.IgnoreCase = False
            regEx.Global = True
            Set Matches = regEx.Execute(ActiveDocument.Range.Text)
            For Each Match In Matches
                ActiveDocument.Range(Match.FirstIndex, _
                    Match.FirstIndex + Len(Match.Value)).Bold = True
            Next
        End Sub

    would not work in a document with tables. In fact, it would not even bold the correct text (the text between the < and >). This leads me to believe I have a broader issue here that I am missing. Here is what a sample doc looks like; notice the wrong text is bold.

    Read the article

  • Can ElementTree be told to preserve the order of attributes?

    - by dmckee
    I've written a fairly simple filter in Python using ElementTree to munge the contents of some XML files. And it works, more or less. But it reorders the attributes of various tags, and I'd like it to not do that. Does anyone know a switch I can throw to make it keep them in the specified order? Context for this: I'm working with and on a particle physics tool that has a complex, but oddly limited, configuration system based on XML files. Among the many things set up that way are the paths to various static data files. These paths are hardcoded into the existing XML, and there are no facilities for setting or varying them based on environment variables; in our local installation they are necessarily in a different place. This isn't a disaster, because the combined source- and build-control tool we're using allows us to shadow certain files with local copies. But even though the data files are static, the XML isn't, so I've written a script for fixing the paths. With the attribute rearrangement, though, diffs between the local and master versions are harder to read than necessary. This is my first time taking ElementTree for a spin (and only my fifth or sixth Python project), so maybe I'm just doing it wrong. Abstracted for simplicity, the code looks like this:

        tree = elementtree.ElementTree.parse(inputfile)
        for e in tree.getiterator():
            e.text = filter(e.text)
        tree.write(outputfile)

    Reasonable or dumb? Related links: "How can I get the order of an element attribute list using Python xml.sax?"; "Preserve order of attributes when modifying with minidom".
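
    A note for later readers: the reordering comes from the serializer, not the parser. Older ElementTree sorted attributes alphabetically on write; from Python 3.8 on, write() preserves the order in which attributes were parsed or set. A quick check, assuming Python 3.8+:

        import xml.etree.ElementTree as ET
        from io import StringIO

        tree = ET.parse(StringIO('<file path="/old/data" checksum="abc" rev="7"/>'))
        tree.getroot().set("path", "/new/data")   # fix the path in place

        out = StringIO()
        tree.write(out, encoding="unicode")
        print(out.getvalue())  # attributes stay in source order on 3.8+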

    Read the article

  • Valueurl Binding On Large Arrays Causes Sluggish User Interface

    - by Hooligancat
    I have a large data set (some 3500 objects) that returns from a remote server via HTTP. Currently the data is being presented in an NSCollectionView. One aspect of the data is a path back to the server for a small image that represents the data (think thumbnail, for simplicity). Binding works fantastically for the data that is already returned, and binding the image via a valueurl binding is easy to do. However, the user interface is very sluggish when scrolling through the data set, which makes me think that the NSCollectionView is retrieving all the image data instead of just the image data used to display the currently viewable images. I was under the impression that Cocoa controls were smart enough to retrieve data only for the information that is actually being output to the user interface, through lazy loading. This certainly seems to be the case with NSTableView, but I could be misguided on this thought. Should valueurl binding act lazily and, moreover, should it act lazily in an NSCollectionView? I could create a caching mechanism (in fact I already have such a thing in place for another application; see my post here if you are interested: http://stackoverflow.com/questions/1740209/populating-nsimage-with-data-from-an-asynchronous-nsurlconnection) but I really don't want to go this route for this specific implementation if I don't have to, as the user could potentially change data sets often and may only want small subsets of the data. Any suggested approaches? Thanks!

    Read the article

  • Why delete-orphan needs "cascade all" to run in JPA/Hibernate ?

    - by Jerome C.
    Hello, I am trying to map a one-to-many relation with cascade "remove" (JPA) and "delete-orphan", because I don't want children to be saved or persisted when the parent is saved or persisted (security reasons, due to client-to-server transfer with GWT and Gilead). But this configuration doesn't work. When I try with cascade "all", it runs. Why does the delete-orphan option need cascade "all" to run? Here is the code (without id or other fields, for simplicity; the class ForumThread defines a simple many-to-one "parent" property without cascade). When using the removeThread function in a transactional function, it does not run; but if I edit CascadeType.REMOVE into CascadeType.ALL, it runs.

        @Entity
        public class Forum {
            private List<ForumThread> threads;

            /**
             * @return the threads
             */
            @OneToMany(mappedBy = "parent", cascade = CascadeType.REMOVE, fetch = FetchType.LAZY)
            @Cascade(org.hibernate.annotations.CascadeType.DELETE_ORPHAN)
            public List<ForumThread> getThreads() {
                return threads;
            }

            /**
             * @param threads the threads to set
             */
            public void setThreads(List<ForumThread> threads) {
                this.threads = threads;
            }

            public void addThread(ForumThread thread) {
                getThreads().add(thread);
                thread.setParent(this);
            }

            public void removeThread(ForumThread thread) {
                getThreads().remove(thread);
            }
        }

    Thanks.

    Read the article

  • Silverlight 3 data-binding child property doesn't update

    - by sonofpirate
    I have a Silverlight control that has my root ViewModel object as its data source. The ViewModel exposes a list of Cards as well as a SelectedCard property which is bound to a drop-down list at the top of the view. I then have a form of sorts at the bottom that displays the properties of the SelectedCard. My XAML appears as (reduced for simplicity):

        <StackPanel Orientation="Vertical">
            <ComboBox DisplayMemberPath="Name"
                      ItemsSource="{Binding Path=Cards}"
                      SelectedItem="{Binding Path=SelectedCard, Mode=TwoWay}" />
            <TextBlock Text="{Binding Path=SelectedCard.Name}" />
            <ListBox DisplayMemberPath="Name"
                     ItemsSource="{Binding Path=SelectedCard.PendingTransactions}" />
        </StackPanel>

    I would expect the TextBlock and ListBox to update whenever I select a new item in the ComboBox, but this is not the case. I'm sure it has to do with the fact that the TextBlock and ListBox are actually bound to properties of the SelectedCard, so they are listening for property change notifications for the properties on that object. But I would have thought that data binding would be smart enough to recognize that the parent object in the binding expression had changed and update the entire binding. It bears noting that the PendingTransactions property (bound to the ListBox) is lazy-loaded. So, the first time I select an item in the ComboBox, I make the async call, the list loads, and the UI updates to display the information corresponding to the selected item. However, when I reselect an item, the UI doesn't change! For example, if my original list contains three cards, I select the first card by default. Data binding attempts to access the PendingTransactions property on that Card object and updates the ListBox correctly. If I select the second card in the list, the same thing happens and I get the list of PendingTransactions for that card displayed. But if I select the first card again, nothing changes in my UI! Setting a breakpoint, I am able to confirm that the SelectedCard property is being updated correctly. How can I make this work?

    Read the article

  • How do I Data-Bind dual DateTimePicker to a single DateTime object

    - by S.C. Madsen
    Hi, I have a simple form with two DateTimePicker controls: one for the date, and one for the time. The thing is, these two controls are supposed to represent a single point in time. Hence I would like to "bind" them to a single DateTime property on my form (for simplicity). I did the following:

        // Start is a DateTime property on the form
        _StartDate.DataBindings.Add("Value", this, "Start");
        _StartTime.DataBindings.Add("Value", this, "Start");

    But hooking into the ValueChanged event yields mixed results: sometimes I get exactly what I want, and sometimes the updates of the property are "sluggish". I figured this way of splitting into two DateTimePickers was fairly common, so how should it be done? Update: there are possibly multiple questions in here: How do I bind a DateTimePicker (Format: Date) to a DateTime property on a form? Then, how do I bind yet another DateTimePicker (Format: Time) to the same property? I'm using Visual Studio Express 2008 (.NET 3.5), and I seemingly get ValueChanged events from the DateTimePickers before the value is changed?

    Read the article

  • PHP: What is an efficient way to parse a text file containing very long lines?

    - by Shaun
    I'm working on a parser in PHP which is designed to extract MySQL records out of a text file. A particular line might begin with a string corresponding to which table the records (rows) need to be inserted into, followed by the records themselves. The records are delimited by a backslash and the fields (columns) are separated by commas. For the sake of simplicity, let's assume that we have a table representing people in our database, with the fields being First Name, Last Name, and Occupation. Thus, one line of the file might be as follows:

        [People] = "\Han,Solo,Smuggler\Luke,Skywalker,Jedi..."

    where the ellipsis (...) could be additional people. One straightforward approach might be to use fgets() to extract a line from the file, and use preg_match() to extract the table name, records, and fields from that line. However, let's suppose that we have an awful lot of Star Wars characters to track. So many, in fact, that this line ends up being 200,000+ characters/bytes long. In such a case, taking the above approach to extract the database information seems a bit inefficient. You have to first read hundreds of thousands of characters into memory, then read back over those same characters to find regex matches. Is there a way, similar to the Java Scanner class's next(String pattern) method when constructed over a file, that allows you to match patterns in-line while scanning through the file? The idea is that you don't have to scan through the same text twice (once to read it from the file into a string, and once to match patterns) or store the text redundantly in memory (in both the file-line string and the matched patterns). Would this even yield a significant increase in performance? It's hard to tell exactly what PHP or Java are doing behind the scenes.
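
    A sketch of the chunked-scanning idea (shown in Python for illustration; PHP's fread() supports the same loop): read fixed-size blocks, split out complete backslash-delimited records, and carry the trailing partial record into the next block, so memory use is bounded by the chunk size plus one record:

        def records(path, delim="\\", chunk_size=64 * 1024):
            """Yield delimiter-separated records without loading whole lines."""
            buf = ""
            with open(path, "r") as f:
                while True:
                    chunk = f.read(chunk_size)
                    if not chunk:
                        break
                    buf += chunk
                    parts = buf.split(delim)
                    buf = parts.pop()          # keep the trailing partial record
                    for part in parts:
                        if part:               # skip empty leading split
                            yield part         # ([Table] prefix handling omitted)
            if buf:
                yield buf

        # Usage: fields of the first record, e.g. ['Han', 'Solo', 'Smuggler']
        # first = next(records("dump.txt")).split(",")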

    Read the article

  • Is it bad practice to have state in a static class?

    - by Matthew
    I would like to do something like this:

        public class Foo
        {
            // Probably really a Guid, but I'm using a string here for simplicity's sake.
            string Id { get; set; }
            int Data { get; set; }

            public Foo(int data) { ... }
            ...
        }

        public static class FooManager
        {
            static Dictionary<string, Foo> foos = new Dictionary<string, Foo>();

            public static Foo Get(string id)
            {
                return foos[id];
            }

            public static Foo Add(int data)
            {
                Foo foo = new Foo(data);
                foos.Add(foo.Id, foo);
                return foo;
            }

            public static bool Remove(string id)
            {
                return foos.Remove(id);
            }

            ...
            // Other members, perhaps events for when Foos are added or removed, etc.
        }

    This would allow me to manage the global collection of Foos from anywhere. However, I've been told that static classes should always be stateless: you shouldn't use them to store global data. Global data in general seems to be frowned upon. If I shouldn't use a static class, what is the right way to approach this problem? Note: I did find a similar question, but the answer given doesn't really apply in my case.

    Read the article

  • Getting the name of a child class in the parent class (static context)

    - by Benoit Myard
    Hi everybody, I'm building an ORM library with reuse and simplicity in mind; everything goes fine except that I got stuck on a stupid inheritance limitation. Please consider the code below:

        class BaseModel
        {
            /*
             * Return an instance of a Model from the database.
             */
            static public function get(/* varargs */)
            {
                // 1. Notice we want an instance of User
                $class = get_class(parent); // value: bool(false)
                $class = get_class(self);   // value: bool(false)
                $class = get_class();       // value: string(9) "BaseModel"
                $class = __CLASS__;         // value: string(9) "BaseModel"

                // 2. Query the database with id
                $row = get_row_from_db_as_array(func_get_args());

                // 3. Return the filled instance
                $obj = new $class();
                $obj->data = $row;
                return $obj;
            }
        }

        class User extends BaseModel
        {
            protected $table = 'users';
            protected $fields = array('id', 'name');
            protected $primary_keys = array('id');
        }

        class Section extends BaseModel
        {
            // [...]
        }

        $my_user = User::get(3);
        $my_user->name = 'Jean';
        $other_user = User::get(24);
        $other_user->name = 'Paul';
        $my_user->save();
        $other_user->save();

        $my_section = Section::get('apropos');
        $my_section->delete();

    Obviously, this is not the behavior I was expecting (although the actual behavior also makes sense). So my question is: do you know of a means to get, in the parent class, the name of the child class?
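
    For what it's worth, this is the problem PHP 5.3's late static bindings address: static:: and get_called_class() resolve to the subclass a static call was made through. The same factory shape falls out naturally in Python, where a classmethod's cls is already bound to the calling subclass; a minimal sketch:

        class BaseModel:
            @classmethod
            def get(cls, *args):
                # cls is User when called as User.get(...), not BaseModel.
                row = {"id": args[0] if args else None}  # stand-in for the DB query
                obj = cls()
                obj.data = row
                return obj

        class User(BaseModel):
            table = "users"

        print(type(User.get(3)).__name__)  # "User"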

    Read the article

  • R selecting duplicate rows

    - by Matt
    Okay, I'm fairly new to R and I've tried to search the documentation for what I need to do, but here is the problem. I have a data.frame called heeds.data in the following form (some columns omitted for simplicity):

        eval.num, eval.count, ..., fitness, fitness.mean,
        green.h.0, green.v.0, offset.0, green.h.1, green.v.1, ...,
        green.h.7, green.v.7, offset.7, ...

    And I have selected a row meeting the following criteria:

        best.fitness <- min(heeds.data$fitness.mean[heeds.data$eval.count == 10])
        best.row <- heeds.data[heeds.data$fitness.mean == best.fitness, ]

    Now, what I want are all of the other rows that have the columns green.h.0 through offset.7 (a contiguous section of columns) equal to best.row. Basically, I'm looking for rows that share some of the conditions of the "best" row. I thought I could just do this:

        heeds.best <- heeds.data$fitness[heeds.data$green.h.0 == best.row$green.h.0 & ...]

    But with 24 columns it seems like a stupid method. I'm looking for something a bit simpler with less manual typing. Thanks!
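
    The usual shape of the answer is one vectorized row-wise comparison against best.row instead of 24 typed-out conditions; R's apply(df[cols], 1, ...) can express it, and the equivalent sketch in pandas looks like this (data and column subset invented for illustration):

        import pandas as pd

        # Hypothetical frame standing in for heeds.data.
        heeds = pd.DataFrame({
            "fitness.mean": [0.9, 0.5, 0.5, 0.7],
            "green.h.0":    [1, 2, 2, 2],
            "green.v.0":    [3, 4, 4, 4],
            "offset.0":     [5, 6, 6, 0],
        })

        cols = ["green.h.0", "green.v.0", "offset.0"]  # really green.h.0 .. offset.7
        best_row = heeds.loc[heeds["fitness.mean"].idxmin()]

        # Compare all columns at once, then require every column to match.
        matches = heeds[(heeds[cols] == best_row[cols]).all(axis=1)]
        print(matches)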

    Read the article
