Search Results

Search found 23323 results on 933 pages for 'worst is better'.

  • Warning: non-integer #successes in a binomial glm! (survey packages)

    - by longrob
    I am using the twang package to create propensity scores, which are used as weightings in a binomial glm using survey::svyglm. The code looks something like this:

        pscore <- ps(ppci ~ var1+var2+.........., data=dt....)
        dt$w <- get.weights(pscore, stop.method="es.mean")
        design.ps <- svydesign(ids=~1, weights=~w, data=dt,)
        glm1 <- svyglm(m30 ~ ppci, design=design.ps, family=binomial)

    This produces the following warning:

        Warning message:
        In eval(expr, envir, enclos) : non-integer #successes in a binomial glm!

    Does anyone know what I could be doing wrong? I wasn't sure if this message would be better on stats.SE, but on balance I thought I would try here first.
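
    A possible direction (a sketch, not from the original question): with propensity-score weights the weighted number of successes is generally not a whole number, so the warning is expected rather than a sign of a coding error. A commonly suggested way to avoid it, which leaves the coefficient estimates unchanged, is the quasibinomial family:

        glm1 <- svyglm(m30 ~ ppci, design=design.ps, family=quasibinomial())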

  • Storing tree data in Javascript

    - by Ozh
    I need to store data to represent this:

        Water + Fire  = Steam
        Water + Earth = Mud
        Mud   + Fire  = Rock

    The goal is the following: I have draggable HTML divs, and when <div id="Fire"> and <div id="Mud"> overlap, I add <div id="Rock"> to the screen. Ever played Alchemy on iPhone or Android? Same stuff. Right now, the way I'm doing this is a JS object:

        var stuff = {
            'Steam' : { needs: [ 'Water', 'Fire'  ] },
            'Mud'   : { needs: [ 'Water', 'Earth' ] },
            'Rock'  : { needs: [ 'Mud',   'Fire'  ] },
            // etc...
        };

    and every time a div overlaps with another one, I traverse the object keys and check the 'needs' array. I can deal with that structure but I was wondering if I could do any better?

    Edit: I should add that I also need to store a few other things, like a short description or an icon name. So typically I have:

        Steam: { needs: [ array ], desc: "short desc", icon: "steam.png" },
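
    One possible refinement (a sketch; combos and combine are hypothetical names): build a reverse index keyed by the sorted ingredient pair once, so each overlap becomes a single lookup instead of a scan over every key. The extra fields (desc, icon) can stay in stuff; the index only maps pairs back to the result's key.

        var combos = {};
        for (var result in stuff) {
            // e.g. 'Fire+Water' -> 'Steam'
            combos[stuff[result].needs.slice().sort().join('+')] = result;
        }

        function combine(a, b) {
            return combos[[a, b].sort().join('+')] || null;   // combine('Mud', 'Fire') -> 'Rock'
        }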

  • How to check if canvas objects overlap each other

    - by ?????? ???????
    I'm trying to check if two objects (e.g. a rectangle and a triangle) on an HTML5 canvas are overlapping each other. Currently I can only check that by looking at the screen (having set globalCompositeOperation='lighter'). My first idea would have been to scan the whole canvas for the "lighter" colour (compare the code snippet above), but for that I would have to look at every single pixel, which is rather costly for what I need. Is there a (better) alternative to automatically check if they are overlapping? Best regards.
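
    A sketch of one alternative, assuming each draggable shape also keeps its position and size in JavaScript (the x, y, w, h fields are hypothetical) instead of reading pixels back from the canvas - axis-aligned bounding boxes are usually enough for drag-and-drop overlap tests, and they avoid getImageData entirely:

        function overlaps(a, b) {
            return a.x < b.x + b.w && a.x + a.w > b.x &&
                   a.y < b.y + b.h && a.y + a.h > b.y;
        }

    For non-rectangular shapes this over-approximates a little; a per-shape hit test can refine it if needed.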

  • Getting the value from an array in the model in rails

    - by slythic
    Hi all, I have a relatively simple problem. I have a model named Item to which I've added a status field. The status field will only have two options (Lost or Found), so I created the following array in my Item model:

        STATUS = [[1, "Lost"], [2, "Found"]]

    In my form view I added the following code, which works great:

        <%= collection_select :item, :status, Item::STATUS, :first, :last, {:include_blank => 'Select status'} %>

    This stores the numeric id (1 or 2) of the status in the database. However, in my show view I can't figure out how to convert from the numeric id (again, 1 or 2) to the text equivalent of Lost or Found. Any ideas on how to get this to work? Is there a better way to go about this? Many thanks, Tony
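
    A minimal sketch (assuming the status column holds the integer key and @item is the record being shown): Array#assoc returns the pair whose first element matches, so the show view can stay a one-liner, or the lookup can be wrapped in a model method such as a hypothetical status_name.

        <%= Item::STATUS.assoc(@item.status).last %>  <%# => "Lost" or "Found" %>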

  • Transforming large Xml files

    - by Chad
    I was using this extension method to transform very large XML files with an XSLT. Unfortunately, I get an OutOfMemoryException on the source.ToString() line. I realize there must be a better way, I'm just not sure what that would be?

        public static XElement Transform(this XElement source, string xslPath, XsltArgumentList arguments)
        {
            var doc = new XmlDocument();
            doc.LoadXml(source.ToString());
            var xsl = new XslCompiledTransform();
            xsl.Load(xslPath);
            using (var swDocument = new StringWriter(System.Globalization.CultureInfo.InvariantCulture))
            {
                using (var xtw = new XmlTextWriter(swDocument))
                {
                    xsl.Transform(doc.CreateNavigator(), arguments, xtw);
                    xtw.Flush();
                    return XElement.Parse(swDocument.ToString());
                }
            }
        }

    Thoughts? Solutions? Etc.
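
    A sketch of one way to drop the intermediate string and XmlDocument (the output is still buffered in a StringWriter, so a truly huge result would need a streamed XmlWriter over a file instead):

        public static XElement Transform(this XElement source, string xslPath, XsltArgumentList arguments)
        {
            var xsl = new XslCompiledTransform();
            xsl.Load(xslPath);
            using (XmlReader reader = source.CreateReader())   // reads the XElement directly
            using (var swDocument = new StringWriter(System.Globalization.CultureInfo.InvariantCulture))
            using (var xtw = new XmlTextWriter(swDocument))
            {
                xsl.Transform(reader, arguments, xtw);
                xtw.Flush();
                return XElement.Parse(swDocument.ToString());
            }
        }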

  • Automating builds from subversion tags

    - by Ajaxx
    I'm trying to automate the build process for our engineering group. As part of that automation, I'm trying to get to a point where the act of applying a specific tag that adheres to a pattern will kick off an automated process that will do the following:

        1. Check out the source code
        2. Create a build script from a template
        3. Build the project

    I'm pretty certain I could do this with a post-commit hook in Subversion, but I'm trying to figure out a way to do this with something other than a Subversion hook. Would it make sense to monitor the tags directory in the Subversion repository to kick off my workflow? Are there any decent tools that help with this (.NET would be great if possible)? Am I better off just writing an engine to do this? My preferences:

        - An existing product that does all or part of this
        - If development work needs to occur, .NET is preferable
        - Works with Windows (we've got a Linux-based repo, but builds all occur on Windows)

  • Flash Media Server dynamic file naming

    - by flying_tiger
    I'm trying to figure out the most efficient/safe way to name recorded streams on FMS. The case is to get a listing of recorded streams from the server (e.g. rec_001, rec_002, ...) and dynamically assign the filename rec_003 to the new stream being recorded. I'm thinking about either using the FMS File object and putting everything in an array of files every time I start the recording procedure, or creating an XML file that would serve as a database of file names. I'm looking for a solution that stays efficient with MULTIPLE connections at a time and a large number of files. Which of these would be best for this purpose? Or do you have any better suggestions for solving this problem?

  • Infor PM (Business Intelligence solution)

    - by Andrew
    We are currently implementing the commercial Infor PM (Performance Management) package as a business intelligence tool (see the Infor PM website). It is apparently used by over 1,000 companies around the world, but I have found scant information about it on the net except for what's on their own website. It covers the whole range of data warehousing and BI functions, with an OLAP environment, an ETL tool, a report writer (called Application Studio), an add-on to Excel to connect to the data in the cubes through a pivot table, etc. Does anyone have any experience with using this package? How does it compare to the big players in BI (Cognos, Microsoft SSAS, Business Objects, etc.)? Any pitfalls I should know about? On the other hand, does it do anything better than its competitors?

  • Help with a query

    - by stackoverflowuser
    Hi. Based on the following table:

        ID    Effort    Name
        ---------------------
        1     1         A
        2     1         A
        3     8         A
        4     10        B
        5     4         B
        6     1         B
        7     10        C
        8     3         C
        9     30        C

    I want to check if the total effort against a name is less than 40, and if so add a row with effort = 40 - (total effort) for that name. The ID of the new row can be anything. If the total effort is greater than 40, then truncate the effort of one of the rows to bring the total down to 40. So after applying the logic above, the table will be:

        ID    Effort    Name
        ---------------------
        1     1         A
        2     1         A
        3     8         A
        10    30        A
        4     10        B
        5     4         B
        6     1         B
        11    25        B
        7     10        C
        8     3         C
        9     27        C

    I was thinking of opening a cursor, keeping a counter of the total effort, and based on the logic inserting existing and new rows into another temporary table. I am not sure if this is an efficient way to deal with this. I would like to learn if there is a better way.
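
    A sketch of the "top up to 40" half in plain set-based SQL (Efforts is a hypothetical table name, the columns as shown above, and ID is assumed to be generated automatically). Names whose totals already exceed 40 would still need a separate UPDATE to trim one row:

        INSERT INTO Efforts (Effort, Name)
        SELECT 40 - SUM(Effort), Name
        FROM   Efforts
        GROUP  BY Name
        HAVING SUM(Effort) < 40;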

  • What's the most efficient way to repeatedly remove leading text using Vim?

    - by John Topley
    What's the most efficient way to remove the text

        2010-04-07 14:25:50,773 DEBUG This is a debug log statement -

    from a log file like the extract below using Vim?

        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 9,8
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 1,11
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 5,2
        2010-04-07 14:25:50,772 DEBUG This is a debug log statement - 8,4

    This is what the result should look like:

        9,8
        1,11
        5,2
        8,4

    Note that on this occasion I'm using gVim on Windows, so please don't suggest any UNIX programs which may be better suited to the task—I have to do it using Vim.
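
    A sketch of one way to do it, assuming the unwanted prefix always ends with "statement - " as in the extract - a single substitute over the whole file:

        :%s/^.*statement - //

    Any pattern that uniquely matches the prefix works the same way; recording it once as a mapping or macro helps if this is a recurring chore.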

  • Problem with NHibernate and saving - NHibernate doesn't detect changes and uses old values.

    - by Vilx-
    When I do this:

        Cat x = Session.Load<Cat>(123);
        x.Name = "fritz";
        Session.Flush();

    NHibernate detects the change and UPDATEs the DB. But, when I do this:

        Cat x = new Cat();
        Session.Save(x);
        x.Name = "fritz";
        Session.Flush();

    I get NULL for name, because that's what was there when I called Session.Save(). Why doesn't NHibernate detect the changes - or better yet, take the values for the INSERT statement at the time of Flush()?
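
    A hedged sketch of one workaround (assuming Cat has a settable Name and object-initializer syntax is available): populate the entity before handing it to Save(), since depending on the id generator Save() may issue the INSERT immediately, and whatever the properties hold at that moment is what gets written:

        Cat x = new Cat { Name = "fritz" };
        Session.Save(x);
        Session.Flush();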

  • Badword filter in PHP?

    - by morpheous
    I am writing a bad-word filter in PHP. I have a list of bad words in an array, and the method cleanse_text() is written like this:

        public static function cleanse_text($originalstring) {
            if (!self::$is_sorted)
                self::doSort();
            return str_ireplace(self::$badwords, '****', $originalstring);
        }

    This works trivially for exact matches, but I also wanted to censor words that have been disguised, like 'ab*d' where 'abcd' is a bad word. This is proving to be a bit more difficult. Here are my questions:

        1. Is a bad-word filter worth bothering with? (It is a site for professionals, so a certain minimum decorum is required - I would have thought.)
        2. Is it worth the hassle of trying to capture obvious workarounds like 'f*ck', or should I not attempt to filter those out?
        3. Is there a better way of writing the cleanse_text() method above?
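
    A sketch of one way to catch simple disguises (fuzzy_pattern is a hypothetical helper): build a regex per word that lets any inner character be replaced by a masking symbol, then run preg_replace over the list:

        function fuzzy_pattern($word) {
            $chars = str_split($word);
            $last  = count($chars) - 1;
            foreach ($chars as $i => $c) {
                $quoted    = preg_quote($c, '/');
                // inner letters may also appear as *, @ or #
                $chars[$i] = ($i > 0 && $i < $last) ? '[' . $quoted . '*@#]' : $quoted;
            }
            return '/\b' . implode('', $chars) . '\b/i';
        }

        echo preg_replace(fuzzy_pattern('abcd'), '****', 'total ab*d move');  // "total **** move"

    It will never catch every variant, and false positives get more likely the looser the pattern, which is part of why many sites settle for exact-match filtering plus moderation.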

  • Are indexes good or bad for a large database?

    - by gmemon
    Hello all, I read on the MySQL Performance Blog that when tables are large, it is better to scan full tables instead of using indexes. I have a table with tens of millions of rows. When conducting queries, if I use no indexes, then queries are 24 times slower than with indexes. I know a lot of things may cause this (e.g. whether rows are stored sequentially), but can you please give me some hints as to what might be happening? Or how I should start examining this issue? I want to understand when the use of indexes is preferred and when it's not. Thanks
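
    A quick way to see what the optimiser is actually doing (table and index names here are hypothetical) is to compare the plan with the index available and with it ignored:

        EXPLAIN SELECT * FROM events WHERE user_id = 42;
        EXPLAIN SELECT * FROM events IGNORE INDEX (idx_user_id) WHERE user_id = 42;

    Roughly, an index pays off when it lets MySQL touch a small fraction of the rows; once a query needs a large share of the table, the random I/O of index lookups can be slower than one sequential full scan, which is the case the blog post describes.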

  • Safe to cast pointer to a forward-declared class to its true base class in C++?

    - by Matt DiMeo
    In one header file I have:

        #include "BaseClass.h"

        // a forward declaration of DerivedClass, which extends class BaseClass.
        class DerivedClass;

        class Foo {
            DerivedClass *derived;

            void someMethod() {
                // this is the cast I'm worried about.
                ((BaseClass*)derived)->baseClassMethod();
            }
        };

    Now, DerivedClass is (in its own header file) derived from BaseClass, but the compiler doesn't know that at the time it's reading the definition above for class Foo. However, Foo refers to DerivedClass pointers and DerivedClass refers to Foo pointers, so they can't both know each other's declaration. First question is whether it's safe (according to the C++ spec, not in any given compiler) to cast a derived class pointer to its base class pointer type in the absence of a full definition of the derived class. Second question is whether there's a better approach. I'm aware I could move someMethod()'s body out of the class definition, but in this case it's important that it be inlined (part of an actual, measured hotspot - I'm not guessing).
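
    One commonly used arrangement that keeps the method inline without any cast (Foo.inl is a hypothetical file name): declare the method in the class body, and define it in an inline section that is only included after DerivedClass is fully defined, so the compiler can perform the ordinary derived-to-base conversion:

        // Foo.h
        class DerivedClass;          // forward declaration only

        class Foo {
            DerivedClass *derived;
            void someMethod();       // defined inline in Foo.inl
        };

        // Foo.inl - included by users of Foo, after DerivedClass.h
        #include "DerivedClass.h"

        inline void Foo::someMethod() {
            derived->baseClassMethod();   // implicit derived-to-base conversion, no cast
        }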

  • ASP.NET, Visual Studio and Subversion - how to integrate?

    - by Michael Stum
    I use AnkhSVN with Visual Studio 2005 and 2008. Now, one thing that bugs me is that Ankh does not really work with ASP.NET sites. I cannot add them properly to a repository and it won't detect changes, especially because the site is on a remote server accessed through FrontPage Extensions (File > Open Site). What are the alternatives? Does a better plug-in exist? Manually downloading the files through FTP and using TortoiseSVN or svn.exe is not really the level of integration I want :) I want to stay within the Visual Studio IDE when possible. Also, I do not control the remote server, so I cannot install anything on it, which means the whole change tracking/comparison against the repository has to be done on my machine.

  • Force reload/refresh when pressing the back button

    - by FlyingCat
    I will try my best to explain this. I have an application that shows 50+ projects on my view page. The user can click an individual project and go to the update page to update the project's information. Everything works fine except that after the user finishes updating a project and hits the 'back' button in the browser to return to the previous view page, the old project information (from before the update) is still there. The user has to hit refresh to see the updated information. It's not that bad, but I wish to provide a better user experience. Any idea how to fix this? Thanks a lot.
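
    A client-side sketch of one option: pages restored from the browser's back/forward cache fire pageshow with persisted set, so the stale copy can simply reload itself (server-side cache headers such as Cache-Control: no-store are the other common lever):

        window.addEventListener('pageshow', function (event) {
            if (event.persisted) {
                window.location.reload();   // page came from the bfcache, fetch it fresh
            }
        });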

  • Regexp in iOS to find comments

    - by SteveDolphin23
    I am trying to find and process 'Java-style' comments within a string in Objective-C. I have a few regex snippets which almost work, but I am stuck on one hurdle: different options seem to make the different styles work. For example, I am using this to match:

        NSArray* matches = [[NSRegularExpression regularExpressionWithPattern:expression
                                                                       options:NSRegularExpressionAnchorsMatchLines
                                                                         error:nil]
                            matchesInString:string options:0 range:searchRange];

    The options here allow me to successfully find and process single-line comments (//) but not multiline (/* */). If I change the option to NSRegularExpressionDotMatchesLineSeparators then I can make multiline work fine, but I can't find the 'end' of a single-line comment. I suppose I really need dot-matches-line-separators, but then I need a better way of finding the end of a single-line comment. The regexps I have so far are:

        @"/\\*.*?\\*/"
        @"//.*$"

    It's clear that if dot matches a line separator then the second one (single line) never 'finishes', but how do I fix this? I found some suggestions for single line that were more like:

        @"(\/\/[^"\n\r]*(?:"[^"\n\r]*"[^"\n\r]*)*[\r\n])"

    But that doesn't seem to work at all! Thanks in advance for any pointers.
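
    A sketch of one pattern that avoids juggling the matching options at all: the single-line branch stops at any line break explicitly, and the block branch crosses lines via [\s\S]. (It still matches comment look-alikes inside string literals; handling those needs an extra alternation for quoted strings.)

        NSString *pattern = @"/\\*[\\s\\S]*?\\*/|//[^\r\n]*";
        NSRegularExpression *regex =
            [NSRegularExpression regularExpressionWithPattern:pattern options:0 error:nil];
        NSArray *matches = [regex matchesInString:string options:0 range:searchRange];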

  • Can using non-primitive Integer/Long data types too frequently in the application hurt performance?

    - by Marcos
    I am using the Long/Integer data types very frequently in my application, to build generic data types. I fear that using these wrapper objects instead of primitive data types may be harmful for performance, since an object needs to be created each time, which is an expensive operation. But it also seems that I have no other choice (generics cannot be parameterised with primitives), so I just have to use them. Still, it would be great if you could suggest anything I could do to make it better, or any way to avoid it. Also, what may be the downsides of this? Suggestions welcomed!
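
    A toy sketch of where the cost usually shows up (illustrative only, not a benchmark): arithmetic on a boxed accumulator allocates a new object on every step, while the primitive version stays allocation-free. The practical compromise is to keep hot loops and large arrays primitive and accept boxing only at the generic boundaries:

        long sum = 0L;
        for (long i = 0; i < 10000000L; i++) {
            sum += i;                 // primitive: no allocation
        }

        Long boxedSum = 0L;
        for (long i = 0; i < 10000000L; i++) {
            boxedSum += i;            // unboxes, adds, re-boxes a new Long each iteration
        }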

  • Problem when using LaTeX \includegraphics with some PDF files

    - by brandstaetter
    I noticed some strange effects when including existing PDF graphics in my LaTeX documents: most files work flawlessly, but some PDFs that were created on a different machine (or came from the web) cause the whole page on which they are embedded to become ever so slightly distorted. I only notice the difference in a side-by-side comparison, but once you see it, it's obvious. The text layout seems slightly broken, and when you zoom in you can see it better. I will try to make some screenshots to further elaborate, but in the meantime: has anyone seen this before, and how can I get rid of these distortions?
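
    One thing sometimes suggested for misbehaving third-party PDFs (hedged - whether it helps depends on what the file actually does to the page) is to re-distill the graphic through Ghostscript before including it, which normalises the PDF version and flattens unusual constructs:

        gs -o figure-clean.pdf -sDEVICE=pdfwrite -dCompatibilityLevel=1.4 figure-original.pdf

    figure-original.pdf / figure-clean.pdf are placeholder names; comparing the page built with the cleaned graphic side by side shows quickly whether the distortion follows the file.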

  • C# move file as soon as it becomes available.

    - by m0s
    Hi, I need to accomplish the following task: attempt to move a file, and if the file is locked, schedule it to be moved as soon as it becomes available. I am using File.Move, which is sufficient for my program. Now the problems are:

        1. I can't find a good way to check whether the file I need to move is locked. I am catching System.IO.IOException, but reading other posts around I discovered that the same exception may be thrown for different reasons as well.
        2. Determining when the file gets unlocked. One way of doing this is probably using a timer/thread and checking the scheduled files, let's say every 30 seconds, and attempting to move them. But I hope there is a better way using FileSystemWatcher.

    This is a .NET 3.5 WinForms application. Any comments/suggestions are appreciated. Thanks for your attention.
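
    A sketch of the usual pattern (TryMove is a hypothetical helper): there is no race-free way to "check" the lock first, so the move itself is the test, and an IOException is treated as "probably still locked, try again later" from a timer or a FileSystemWatcher Changed handler:

        static bool TryMove(string sourcePath, string destinationPath)
        {
            try
            {
                File.Move(sourcePath, destinationPath);
                return true;
            }
            catch (IOException)
            {
                return false;   // likely locked (or a genuine I/O error) - reschedule the attempt
            }
        }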

  • Accessing "current class" from WPF custom MarkupExtension

    - by chaiguy
    I'm attempting to write a custom MarkupExtension to make my life easier by giving me a better way to specify bindings in XAML. However I would like to know if there is any way I can access the object that represents the file the MarkupExtension is used in. In other words, suppose I have a UserControl that defines a particular rendition of a data model of my program. This control has lots of visual stuff like grids, borders and general layout. If I use my MarkupExtension on a particular property of some element in this UserControl, I want to access the instance of the UserControl, without knowing what type it is (I plan on using reflection). Is this at all possible?
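
    A sketch of one approach, assuming .NET 4's System.Xaml is available (RootObjectExtension is a hypothetical name): the service provider passed to ProvideValue can expose IRootObjectProvider, whose RootObject is the instance of the class declared at the root of the XAML file - the containing UserControl here - ready for reflection:

        // needs: System.Windows.Markup (MarkupExtension) and System.Xaml (IRootObjectProvider)
        public class RootObjectExtension : MarkupExtension
        {
            public override object ProvideValue(IServiceProvider serviceProvider)
            {
                var rootProvider = serviceProvider.GetService(typeof(IRootObjectProvider))
                                       as IRootObjectProvider;
                return rootProvider != null ? rootProvider.RootObject : null;
            }
        }

    The service is not guaranteed in every context (templates in particular), so the null check is worth keeping.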

  • move data from one table to another, postgresql edition

    - by IggShaman
    Hi all, I'd like to move some data from one table to another (with a possibly different schema). A straightforward solution that comes to mind is to start a transaction with serializable isolation level and then:

        INSERT INTO dest_table SELECT data FROM orig_table, other-tables WHERE <condition>;
        DELETE FROM orig_table USING other-tables WHERE <condition>;
        COMMIT;

    Now what if the amount of data is rather big, and the <condition> is expensive to compute? In PostgreSQL, a RULE or a stored procedure can be used to delete data on the fly, evaluating the condition only once. Which solution is better? Are there other options?
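
    A sketch of another option on newer servers (writable CTEs, PostgreSQL 9.1+), with the placeholders kept as in the question: the DELETE and the INSERT share one statement, so <condition> is evaluated once and the inserted rows are exactly the deleted ones:

        WITH moved AS (
            DELETE FROM orig_table
            USING other-tables
            WHERE <condition>
            RETURNING orig_table.*
        )
        INSERT INTO dest_table
        SELECT data FROM moved;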

  • Best way to check for nullable bool in a condition expression (if ...)

    - by FireSnake
    I was wondering what the cleanest and most understandable syntax is for doing condition checks on nullable bools. Is the following good or bad coding style? Is there a way to express the condition better/more cleanly?

        bool? nullableBool = true;
        if (nullableBool ?? false) { ... } else { ... }

    Especially the if (nullableBool ?? false) part. I don't like the if (x.HasValue && x.Value) style... (Not sure whether the question has been asked before - I couldn't find something similar with the search.)
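
    For comparison, a tiny sketch of the other spelling often suggested: the lifted == operator treats null as not equal to true, so this reads as "only when explicitly true" and sends null down the else branch, exactly like the ?? false version:

        bool? nullableBool = true;
        if (nullableBool == true) { /* ... */ } else { /* ... */ }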

  • Are there any GOOD javascript addins for Visual Studio?

    - by Jeremy B.
    In our daily work we maintain some rather large JavaScript libraries. We use VS2008, and while they made some improvements to the JavaScript IDE, I still find it lacking. There is no outlining, no collapsing, or other ways to keep the code organized. I have tried js-addin and JSLint, which crash and don't have the features I want, respectively. I have actually gone as far as running Aptana Studio, as their JavaScript IDE is much better than what I can get out of Visual Studio. I'm getting tired of having to maintain two IDEs. Is there anything out there that can make JavaScript editing less painful in Visual Studio 2008? (We don't have the option of 2010 yet.)

  • How to efficiently convert DataSet.Tables to List<DataTable>?

    - by Soenhay
    I see many posts about converting the table(s) in a DataSet to a list of DataRows or other row data, but I was unable to find anything about this question. This is what I came up with in .NET 3.0:

        internal static List<DataTable> DataSetToList(DataSet ds)
        {
            List<DataTable> result = new List<DataTable>();
            foreach (DataTable dtbl in ds.Tables)
            {
                result.Add(dtbl);
            }
            return result;
        }

    Is there a better way, excluding an extension method? Thanks
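
    A one-line alternative as a sketch (needs a using System.Linq; DataTableCollection only implements the non-generic IEnumerable, which is why Cast<DataTable>() is needed to get the typing back):

        List<DataTable> tables = ds.Tables.Cast<DataTable>().ToList();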
