Search Results

Search found 9183 results on 368 pages for 'analytic functions'.


  • Objects in JavaScript defined and undefined at the same time (in a FireFox extension)

    - by Alexey Romanov
    I am chasing down a bug in a FireFox extension. I've finally managed to see it for myself (I've only had reports before) and I can't understand how what I saw is possible. One error message from my extension in the Error Console is "gBrowser is not defined". This by itself would be surprising enough, since the overlay is over browser.xul and navigator.xul, and I expect gBrowser to be available from both. Even worse is the actual place where it happens: line 101 of nextplease.js. That is, inside the function isTopLevelDocument, which is only called from onContentLoaded, which is only called from onLoad here: gBrowser.addEventListener(this.loadType, function (event) { nextplease.loadListener.onContentLoaded(event); }, true); So gBrowser is defined in onLoad, but somehow undefined in isTopLevelDocument. When I tried to actually use the extension, I got another error: "nextplease is not defined". The interesting thing is that it happened on lines 853 and 857. That is, inside the functions nextplease.getNextLink = function () { nextplease.getLink(window.content, nextplease.NextPhrasesMap, nextplease.NextImagesMap, nextplease.isNextRegExp, nextplease.NEXT_SEARCH_TYPE); } nextplease.getPrevLink = function () { nextplease.getLink(window.content, nextplease.PrevPhrasesMap, nextplease.PrevImagesMap, nextplease.isPrevRegExp, nextplease.PREV_SEARCH_TYPE); } So nextplease is somehow defined enough to call these functions, but isn't defined inside them. Finally, executing typeof(nextplease) in Execute JS returns "object". Same for gBrowser. How can this happen? Any ideas?

    Read the article

  • Getting Argument Names In Ruby Reflection

    - by Joe Soul-bringer
    I would like to do some fairly heavy-duty reflection in the Ruby programming language. I would like to create a function which would return the names of the arguments of various calling functions higher up the call stack (just one higher would be enough but why stop there?). I could use Kernel.caller to go to the file and parse the argument list, but that would be ugly and unreliable. The function that I would like would work in the following way: module A def method1( tuti, fruity) foo end def method2(bim, bam, boom) foo end def foo print caller_args[1].join(",") #the "1" means one step up the call stack end end A.method1 #prints "tuti,fruity" A.method2 #prints "bim, bam, boom" I would not mind using ParseTree or some similar tool for this task, but looking at ParseTree, it is not obvious how to use it for this purpose. Creating a C extension like this is another possibility but it would be nice if someone had already done it for me. Edit2: I can see that I'll probably need some kind of C extension. I suppose that means my question is what combination of C extension would work most easily. I don't think caller+ParseTree would be enough by themselves. As far as why I would like to do this goes, rather than saying "automatic debugging", perhaps I should say that I would like to use this functionality to do automatic checking of the calling and return conditions of functions. Say def add x, y check_positive return x + y end Where check_positive would throw an exception if x and y weren't positive (obviously, there would be more to it than that, but hopefully this gives enough motivation)
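
    Not a Ruby answer, but the idea is easy to see in a language whose runtime exposes caller frames directly. The sketch below is Python and purely illustrative; caller_arg_names, method1 and method2 are made-up names mirroring the question.

```python
# A Python sketch of the behaviour the question asks for, included only to
# illustrate that it needs runtime access to caller frames; caller_arg_names,
# method1 and method2 are hypothetical names mirroring the Ruby example.
import inspect

def caller_arg_names(depth=1):
    # stack()[0] is this function, [1] is our caller, [2] is the caller's caller
    frame = inspect.stack()[depth + 1].frame
    return inspect.getargvalues(frame).args

def method1(tuti, fruity):
    foo()

def method2(bim, bam, boom):
    foo()

def foo():
    print(",".join(caller_arg_names(1)))  # names of the parameters one frame up

method1(1, 2)     # prints "tuti,fruity"
method2(1, 2, 3)  # prints "bim,bam,boom"
```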

    Read the article

  • Delphi - working with dll's for beginners

    - by doubleu
    Hi there, I'm a total newbie when it comes to DLLs. And I don't need to create them, I just need to use one. I've read some tutorials, but they weren't as helpful as I hoped. Here's the way I started: I downloaded the SDK which I need to use (ESTOS Tapi Server). I read the docs and spotted the DLL which I need to use, which is ENetSN.dll, and so I registered it. Next I used Dependency Walker to take a look at the DLL - and I was puzzled because there are only these functions: DllCanUnloadNow, DllGetClassObject, DllRegisterServer and DllUnregisterServer, and these are not the functions mentioned in the docs. I think I have to call DllGetClassObject to get an object out of the DLL with which I can start to work. Unfortunately the tutorials I found don't mention how this is done (or I didn't understand it). There are also 3 examples delivered for VB and C++, but I wasn't able to 'translate' them into Delphi. If somebody knows a tutorial where this is explained, or could give me a pointer in the right direction, I would be very thankful.

    Read the article

  • Using JavaCC to infer semantics from a Composite tree

    - by Skice
    Hi all, I am programming (in Java) a very limited symbolic calculus library that manages polynomials, exponentials and expolinomials (sums of elements like "x^n * e^(c x)"). I want the library to be extensible in the sense of new analytic forms (trigonometric, etc.) or new kinds of operations (logarithm, domain transformations, etc.), so a Composite pattern that represents the syntactic structure of an expression, together with a bunch of Visitors for the operations, does the job quite well. My problem arises when I try to implement operations that depend on the semantics more than on the syntax of the Expression (like integrals, for instance: there are a lot of resolution methods for specific classes of functions, but these same classes can be represented with more than a single syntax). So I thought I need something to "parse" the Composite tree to infer its semantics in order to invoke the right integration method (if any). Someone pointed me to JavaCC, but all the examples I've seen deal only with string parsing, so I don't know if I'm digging in the right direction. Any suggestions? (I hope to have been clear enough!)
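
    Since the expression is already a tree, a parser generator may be the wrong tool. Purely as an illustration (in Python rather than Java, with made-up node classes), classifying the Composite by structural pattern matching could look like this:

```python
# Not JavaCC: a tiny sketch of classifying a Composite expression tree by
# structural pattern matching instead of re-parsing text. All class names
# here are hypothetical stand-ins for the library's node types.
from dataclasses import dataclass

@dataclass
class Var:            # x
    pass

@dataclass
class Power:          # x^n
    n: int

@dataclass
class Exp:            # e^(c*x)
    c: float

@dataclass
class Product:
    left: object
    right: object

def classify(node):
    """Return a coarse semantic class used to pick an integration routine."""
    if isinstance(node, (Var, Power)):
        return "polynomial"
    if isinstance(node, Exp):
        return "exponential"
    if isinstance(node, Product):
        kinds = {classify(node.left), classify(node.right)}
        if kinds == {"polynomial", "exponential"}:
            return "expolinomial"   # x^n * e^(c*x) -> e.g. integrate by parts
        if kinds == {"polynomial"}:
            return "polynomial"
    return "unknown"

print(classify(Product(Power(3), Exp(-2.0))))  # expolinomial
```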

    Read the article

  • When *not* to use prepared statements?

    - by Ben Blank
    I'm re-engineering a PHP-driven web site which uses a minimal database. The original version used "pseudo-prepared-statements" (PHP functions which did quoting and parameter replacement) to prevent injection attacks and to separate database logic from page logic. It seemed natural to replace these ad-hoc functions with an object which uses PDO and real prepared statements, but after doing my reading on them, I'm not so sure. PDO still seems like a great idea, but one of the primary selling points of prepared statements is being able to reuse them… which I never will. Here's my setup: The statements are all trivially simple. Most are in the form SELECT foo,bar FROM baz WHERE quux = ? ORDER BY bar LIMIT 1. The most complex statement in the lot is simply three such selects joined together with UNION ALLs. Each page hit executes at most one statement and executes it only once. I'm in a hosted environment and therefore leery of slamming their servers by doing any "stress tests" personally. Given that using prepared statements will, at minimum, double the number of database round-trips I'm making, am I better off avoiding them? Can I use PDO::MYSQL_ATTR_DIRECT_QUERY to avoid the overhead of multiple database trips while retaining the benefit of parametrization and injection defense? Or do the binary calls used by the prepared statement API perform well enough compared to executing non-prepared queries that I shouldn't worry about it? EDIT: Thanks for all the good advice, folks. This is one where I wish I could mark more than one answer as "accepted" — lots of different perspectives. Ultimately, though, I have to give rick his due… without his answer I would have blissfully gone off and done the completely Wrong Thing even after following everyone's advice. :-) Emulated prepared statements it is!

    Read the article

  • Python3 and ftplib uploading files

    - by Teifion
    My python2 script uploads files nicely using this method but python3 is presenting problems and I'm stuck as to where to go next (googling hasn't helped). from ftplib import FTP ftp = FTP(ftp_host, ftp_user, ftp_pass) ftp.storbinary('STOR myfile.txt', open('myfile.txt')) The error I get is Traceback (most recent call last): File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload ftp.storlines('STOR myfile.txt', open('myfile.txt')) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 454, in storbinary conn.sendall(buf) TypeError: must be bytes or buffer, not str I tried altering the code to from ftplib import FTP ftp = FTP(ftp_host, ftp_user, ftp_pass) ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt')) But instead I got this Traceback (most recent call last): File "/Library/WebServer/CGI-Executables/rob3/functions/cli_f.py", line 12, in upload ftp.storbinary('STOR myfile.txt'.encode('utf-8'), open('myfile.txt')) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 450, in storbinary conn = self.transfercmd(cmd) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 358, in transfercmd return self.ntransfercmd(cmd, rest)[0] File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 329, in ntransfercmd resp = self.sendcmd(cmd) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 244, in sendcmd self.putcmd(cmd) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 179, in putcmd self.putline(line) File "/Library/Frameworks/Python.framework/Versions/3.1/lib/python3.1/ftplib.py", line 172, in putline line = line + CRLF TypeError: can't concat bytes to str Can anybody point me in the right direction
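
    For what it's worth, the usual Python 3 fix is to open the local file in binary mode: storbinary now sends raw bytes, so passing a text-mode file handle is what produces the "must be bytes" TypeError, while the command itself stays an ordinary str. A minimal sketch (ftp_host, ftp_user and ftp_pass are the same placeholders as in the question):

```python
from ftplib import FTP

# ftp_host, ftp_user, ftp_pass: placeholders, assumed to be defined elsewhere
ftp = FTP(ftp_host, ftp_user, ftp_pass)
with open('myfile.txt', 'rb') as fh:       # 'rb' is the key change: bytes, not str
    ftp.storbinary('STOR myfile.txt', fh)  # the command stays a normal str
ftp.quit()
```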

    Read the article

  • Writing fortran robust and "modern" code

    - by Blklight
    In some scientific environments, you often cannot go without FORTRAN, as most of the developers only know that idiom and there is a lot of legacy code and related experience. And frankly, there are not many other cross-platform options for high performance programming (C++ would do the task, but the syntax, zero-starting arrays, and pointers are too much for most engineers ;-) ). I'm a C++ guy but I'm stuck with some F90 projects. So, let's assume a new project must use FORTRAN (F90), but I want to build the most modern software architecture out of it, while staying compatible with most "recent" compilers (Intel ifort, but also Sun/HP/IBM's own compilers). So I'm thinking of imposing: global variables forbidden, no gotos, no jump labels, "implicit none", etc. "object-oriented programming" (modules with datatypes + related subroutines) modular/reusable functions, well documented, reusable libraries assertions/preconditions/invariants (implemented using preprocessor statements) unit tests for all (most) subroutines and "objects" an intense "debug mode" (#ifdef DEBUG) with more checks and all possible Intel compiler checks (array bounds, subroutine interfaces, etc.) uniform and enforced legible coding style, using code processing tools C stubs/wrappers for libpthread, libDL (and eventually GPU kernels, etc.) C/C++ implementation of utility functions (strings, file operations, sockets, memory alloc/dealloc reference counting for debug mode, etc.) (This may all seem like "evident" modern programming practice, but in a legacy Fortran world most of these are big changes to the typical programmer workflow.) The goal with all that is to have trustworthy, maintainable and modular code. Whereas, in typical Fortran, modularity is often not a primary goal, and code is trustworthy only if the original developer was very clever and the code has not been changed since then! (I'm a bit joking here, but not much.) I searched around for references about object-oriented Fortran and programming-by-contract (assertions/preconditions/etc.), and found only ugly and outdated documents, syntaxes and papers written by people with no large-scale project involvement, and dead projects. Any good URLs, advice, or reference papers/books on the subject?

    Read the article

  • Multiple WCF calls for a single ASP.NET page load

    - by Rodney Burton
    I have an existing asp.net web application I am redesigning to use a service architecture. I have the beginnings of a WCF service which I am able to call and perform functions with, no problems. As far as updating data goes, it all makes sense. For example, I have a button that says Submit Order; it sends the data to the service, which does the processing. Here's my concern: I have an ASP.NET page that shows me a list of orders (a View Orders page), and at the top I have a bunch of drop down lists for order types and other search criteria, which are populated by querying different tables from the database (lookup tables, etc.). I am hoping to eventually completely decouple the web application from the DB, and use data contracts to pass information between the BLL, the SOA, and the web app. With that said, how can I reduce the # of WCF calls needed to load my "View Orders" page? I would need to make 1 call to get the list of orders, and 1 call for each drop down list, etc., because those are populated by individual functions in my BLL. Is it good architecture to create a web service method that returns a specialized data contract that consists of everything you would need to display a View Orders page, in 1 shot? Something like this pseudocode: public class ViewOrderPageDTO { public OrderDTO[] Orders { get; set; } public OrderTypesDTO[] OrderTypes { get; set; } public OrderStatusesDTO[] OrderStatuses { get; set; } public CustomerListDTO[] CustomerList { get; set; } } Or is it better practice in the page_load event to make 5 or 6 or even 15 individual calls to the SOA to get the data needed to load the page, thereby bypassing the need for specialized WCF methods or DTOs that conglomerate other DTOs? Thanks for your input and suggestions.

    Read the article

  • iPhone app rejection for using ICU (Unicode extensions)

    - by nickbit
    I received the following mail from Apple regarding my application: *Thank you for submitting your update to ??µ??es?a to the App Store. During our review of your application we found it is using private APIs, which is in violation of the iPhone Developer Program License Agreement section 3.3.1; "3.3.1 Applications may only use Documented APIs in the manner prescribed by Apple and must not use or call any private APIs." While your application has not been rejected, it would be appropriate to resolve this issue in your next update. The following non-public APIs are included in your application: u_isspace ubrk_close ubrk_current ubrk_first ubrk_next ubrk_open If you have defined methods in your source code with the same names as the above mentioned APIs, we suggest altering your method names so that they no longer collide with Apple's private APIs to avoid your application being flagged with future submissions. Please resolve this issue in your next update to ??µ??es?a. Sincerely, iPhone App Review Team* The functions mentioned in this mail come from the ICU library (International Components for Unicode). Although my app has not been rejected at this point, I don't feel very secure about the future of my app, because it relies heavily on Unicode and on these components in particular. Another thing is that I do not call these functions directly; they are called by a custom 'sqlite' build (with FTS3 extensions enabled). Am I missing something here? Any suggestions?

    Read the article

  • How to use include within a function?

    - by mahks
    I have a large function that I wish to load only when it is needed, so I assume using include is the way to go. But I need several support functions as well - only used in go_do_it(). If they are in the included file I get a redeclare error. See Example A. If I place the support functions in an include_once it works fine; see Example B. If I use include_once for the func_1 code, the second call fails. -func_1 needs include -func_2 needs include_once Why does include_once fail for func_1? Does it get reloaded each time the function is called? Example A: <?php /* main.php */ go_do_it(); go_do_it(); function go_do_it(){ include 'func_1.php'; } ?> <?php /* func_1.php */ echo '<br>Doing it'; nested_func() function nested_func(){ echo ' in nest'; } ?> Example B: <?php /* main.php */ go_do_it(); go_do_it(); function go_do_it(){ include_once 'func_2.php'; include 'func_1.php'; } ?> <?php /* func_1.php */ echo '<br> - doing it'; nested_func(); ?> <?php /* func_2.php */ function nested_func(){ echo ' in nest'; } ?>

    Read the article

  • Is there a PHP benchmark that meets these specific criteria? [closed]

    - by Alex R
    I'm working on a tool which converts PHP code to Scala. As one of the finishing touches, I'm in need of a really good (er, somewhat biased) benchmark. By dumb luck my first benchmark attempt was with some code which uses bcmath extensively, which unfortunately is 1000x slower in Java, making the Scala code 22x slower overall than the original PHP. So I'm looking for some meaningful PHP benchmark with the following characteristics: The PHP source needs to be in a single file. It should solve a real-world problem. No silly looping over empty methods etc. I need it to be simple to set up - no databases, hard-to-find input files, etc. Simple text input and output preferred. It should not use features that are slow in Java (BigInteger, trigonometric functions, etc). It should not use esoteric or dynamic PHP functions (e.g. no "eval" or "variable vars"). It should not over-rely on built-in libraries, e.g. MD5, crypt, etc. It should not be I/O bound. A CPU-bound, memory-hungry algorithm is preferred. Basically, intensive OO operations, integer and string manipulation, recursion, etc. would be great. Thanks

    Read the article

  • subtotals in columns usind reshape2 in R

    - by user1043144
    I have spent some time now learning reshape2 and plyr but I still do not get it. This time I have a problem with (a) subtotals and (b) passing different aggregate functions. Here is an example using data from the excellent tutorial on the blog of mrdwab http://news.mrdwab.com/ # libraries library(plyr) library(reshape2) # get data and add few more variables book.sales = read.csv("http://news.mrdwab.com/data-booksales") book.sales$Stock = book.sales$Quantity + 10 book.sales$SubjCat[(book.sales$Subject == 'Economics') | (book.sales$Subject == 'Management') ] <- '1_EconSciences' book.sales$SubjCat[book.sales$Subject %in% c('Anthropology', 'Politics', 'Sociology') ] <- '2_SocSciences' book.sales$SubjCat[book.sales$Subject %in% c('Communication', 'Fiction', 'History', 'Research', 'Statistics') ] <- '3_other' # to get to my starting dataframe (close to the project I am working on) book.sales1 <- ddply(book.sales, c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'), summarize, Stock = sum(Stock), Sold = sum(Quantity), Ratio = round((100 * sum(Quantity)/ sum(Stock)), digits = 1)) #melt it m.book.sales = melt(data = book.sales1, id.vars = c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'), measured.vars = c('Stock', 'Sold', 'Ratio')) # cast it Tab1 <- dcast(data = m.book.sales, formula = Region + Representative ~ Publisher + variable, fun.aggregate = sum, margins = c('Region', 'Representative')) Now my questions: I have been able to add the subtotals in rows, but is it possible to also add margins in the columns? Say, for example, totals of Stock for one Publisher? (Sorry, I actually mean, for example, total Sold across all publishers.) There is a problem with the columns with “ratio”: how can I get “mean” instead of “sum” for this variable? P.S.: I have seen some examples using reshape. Would you recommend using it instead of reshape2 (which does not seem to include the functionality of those two functions)?
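
    Not a reshape2 answer, but the two ideas (column margins plus a different aggregate per measure) are easy to show with a pandas analogue. The toy data below is made up, and margins behaviour combined with a per-column aggfunc dict can vary by pandas version.

```python
# A pandas sketch of "subtotals plus per-column aggregate functions",
# included only to illustrate the concept; the data frame is synthetic.
import pandas as pd

sales = pd.DataFrame({
    "Region":    ["East", "East", "West", "West"],
    "Publisher": ["P1",   "P2",   "P1",   "P2"],
    "Sold":      [10,     4,      7,      9],
    "Ratio":     [50.0,   20.0,   35.0,   45.0],
})

table = pd.pivot_table(
    sales,
    index="Region",
    columns="Publisher",
    values=["Sold", "Ratio"],
    aggfunc={"Sold": "sum", "Ratio": "mean"},  # sum the counts, average the ratios
    margins=True,                              # adds an "All" row and column
)
print(table)
```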

    Read the article

  • Complex derived attributes in Django models

    - by rabidpebble
    What I want to do is implement submission scoring for a site with users voting on the content, much like in e.g. reddit (see the 'hot' function in http://code.reddit.com/browser/sql/functions.sql). My submission model currently keeps track of up and down vote totals. Currently, when a user votes I create and save a related Vote object and then use F() expressions to update the Submission object's voting totals. The problem is that I want to update the score for the submission at the same time, but F() expressions are limited to only simple operations (it's missing support for log(), date_part(), sign() etc.) From my limited experience with Django I can see 4 options here: extend F() somehow (haven't looked at the code yet) to support the missing SQL functions; this is my preferred option and seems to fit within the Django framework the best define a scoring function (much like reddit's 'hot' function) in my database, and have Django use the value of that function for the value of the score field; as far as I can tell, #2 is not possible wrap my two step voting process in a suitably isolated transaction so that I can calculate the voting totals in Python and then update the Submission's voting totals without fear that another vote against the submission could be added/changed in the meantime; I'm hesitant to take this route because it seems overly complex - what is a "suitably isolated transaction" in this case anyway? use raw SQL; I would prefer to avoid this entirely -- what's the point of an ORM if I have to revert to SQL for such a common use case as this! (Note that this coming from somebody who loves sprocs, but is using Django for ease of development.) Before I embark on this mission to extend F() (which I'm not sure is even possible), am I about to reinvent the wheel? Is there a more standard way to do this? It seems like such a common use case and yet in an hour of searching I have yet to find a common solution...
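
    A hedged sketch of what option 3 could look like: lock the row inside a transaction and recompute the score in Python. Submission, Vote and the ups/downs/score fields are names assumed from the description, and hot() is a placeholder formula, so treat this as an illustration of "a suitably isolated transaction" rather than the recommended answer.

```python
# Sketch only: assumed model names (Submission, Vote) and fields (ups, downs,
# score); hot() stands in for whatever scoring formula is actually wanted.
from math import log
from django.db import transaction

def hot(ups, downs):
    score = ups - downs
    return round(log(max(abs(score), 1), 10), 7)   # placeholder formula

def cast_vote(submission_id, user, value):
    with transaction.atomic():
        # SELECT ... FOR UPDATE: concurrent votes on the same row block here
        submission = (Submission.objects
                      .select_for_update()
                      .get(pk=submission_id))
        Vote.objects.create(submission=submission, user=user, value=value)
        if value > 0:
            submission.ups += 1
        else:
            submission.downs += 1
        submission.score = hot(submission.ups, submission.downs)
        submission.save(update_fields=["ups", "downs", "score"])
```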

    Read the article

  • C# COM Cross Thread problem

    - by user364676
    Hi, we're developing software to control a scientific measuring device. It provides a COM interface that defines several functions to set measurement parameters and fires an event when it has measured data. In order to test our software, I'm implementing a simulation of that device. The COM object runs a loop which periodically fires the event; another loop in the client app should then set up the COM simulator using the given functions. I created a class for measurement parameters which is instantiated when setting up a new measurement. // COM-Object public class MeasurementParams { public double Param1; public double Param2; } public class COM_Sim : ICOMDevice { public MeasurementParams newMeasurement; IClient client; public int NewMeasurement() { newMeasurment = new MeasurementParam(); } public int SetParam1(double val) { // why is newMeasurement null when method is called from client loop newMeasurement.Param1 = val; } void loop() { while(true) { // fire event client.HandleEvent; } } } public class Client : IClient { ICOMDevice server; public int HandleEvent() { // handle this event server.NewMeasurement(); server.SetParam1(0.0); } void loop() { while(true) { // do some stuff... server.NewMeasurement(); server.SetParam1(0.0); } } } Both of the loops run in independent threads. When server.NewMeasurement() is called, the object on the server is set to a new instance, but in the next function the object is null again. Doing the same when handling the server event works perfectly, because the method runs in the server's thread. How do I make it work from the client thread as well? As the client is meant to work with the real device, I cannot modify the interfaces given by the manufacturer. I also need to set up measurements independently of the event handler, which is not fired regularly. I assume this problem is related to multithreaded COM behavior, but I found nothing on this topic.

    Read the article

  • performance of large number calculations in python (python 2.7.3 and .net 4.0)

    - by g36
    There are a lot of general questions about Python performance in comparison to other languages. I've got a more specific example: two simple functions written in Python and C#, both checking whether an int is prime. Python: import time def is_prime(n): num =n/2 while num >1: if n % num ==0: return 0 num-=1 return 1 start = time.clock() probably_prime = is_prime(2147483629) elapsed = (time.clock() - start) print 'time : '+str(elapsed) and C#: using System.Diagnostics; public static bool IsPrime(int n) { int num = n/2; while(num >1) { if(n%num ==0) { return false; } num-=1; } return true; } Stopwatch sw = new Stopwatch(); sw.Start(); bool result = Functions.IsPrime(2147483629); sw.Stop(); Console.WriteLine("time: {0}", sw.Elapsed); And the times (which are a surprise for me as a beginner in Python :)): Python: 121 s; C#: 6 s. Could you explain where this big difference comes from?
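
    As an aside, most of the gap is per-iteration interpreter overhead: the loop runs roughly a billion times, and each trip through CPython's bytecode dispatch with boxed integers costs far more than a compiled C# iteration. The sketch below changes the algorithm (equally applicable in both languages) mainly to show how dominant the iteration count is:

```python
# Trial division only up to sqrt(n): the per-language constant factor still
# differs, but the iteration count, not the language, is what made 121 s.
import time

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:       # ~2e4 iterations for n near 2**31, instead of ~1e9
        if n % d == 0:
            return False
        d += 2
    return True

start = time.perf_counter()
print(is_prime(2147483629), time.perf_counter() - start)
```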

    Read the article

  • Ways to gain a deeper understanding of programming concepts?

    - by MrPlow
    I'm a marketer and have been messing around in PHP/MySQL for years. Recently (the last several months) I've been making my own scripts/programs in Python and I've really enjoyed the whole problem solving process. I've read(skimmed) some books and understand the basics of OOP, polymorphism, etc.. I have a general interest in AI and Natural Language in particular but it seems these things require a masters in Computer Science. My knowledge of math is poor. The last class I took was calculus, and I've forgotten the majority of it. Basically I'm looking for things to learn that will help me think in a more analytic way, and maybe see solutions where I didn't before. Improving my ability to program in Python would be nice too. I don't need to learn a specific language or something for employment, just enjoyment. Although my work often involves web development so some utility would be nice. I don't like learning concepts by just reading them. I need to apply them, even if the examples are contrived. A recommendation of a couple good books or other resources would be nice. :) Apologies if this is too vague/misplaced...

    Read the article

  • In languages which create a new scope each time in a loop block, a new local copy of the local loop

    - by Jian Lin
    It seems that in languages like C, Java, and Ruby (as opposed to JavaScript), a new scope is created for each iteration of a loop block, and the local variable defined for the loop is actually made into a new local variable every single time and recorded in this new scope? For example, in Ruby: p RUBY_VERSION $foo = [] (1..5).each do |i| $foo[i] = lambda { p i } end (1..5).each do |j| $foo[j].call() end the printout is: [MacBook01:~] $ ruby scope.rb "1.8.6" 1 2 3 4 5 [MacBook01:~] $ So, it looks like when a new scope is created, a new local copy of i is also created and recorded in this new scope, so that when the function is executed at a later time, the "i" is found in those scope chains as 1, 2, 3, 4, 5 respectively. Is this true? (It sounds like a heavy operation.) Contrast that with p RUBY_VERSION $foo = [] i = 0 (1..5).each do |i| $foo[i] = lambda { p i } end (1..5).each do |j| $foo[j].call() end This time, the i is defined before entering the loop, so Ruby 1.8.6 will not put this i in the new scope created for the loop block, and therefore when i is looked up in the scope chain, it always refers to the i in the outer scope and gives 5 every time: [MacBook01:~] $ ruby scope2.rb "1.8.6" 5 5 5 5 5 [MacBook01:~] $ I heard that in Ruby 1.9, i will be treated as a local variable of the loop even when there is an i defined earlier? The operation of creating a new scope and a new local copy of i on each pass through the loop seems heavy, and it wouldn't have mattered if we were not invoking the functions at a later time. So when the functions don't need to be invoked at a later time, could the interpreter, or the compiler for C / Java, optimize this so that there is no local copy of i each time?
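
    For comparison (Python this time, not Ruby): Python closures behave like the second Ruby snippet by default, and the usual default-argument idiom recovers the per-iteration copy, which makes the trade-off the question asks about easy to poke at.

```python
# All five lambdas share one binding of i, so they print the final value,
# just like the Ruby 1.8.6 example where i was defined outside the loop.
shared = [lambda: print(i) for i in range(1, 6)]
for f in shared:
    f()                 # 5 5 5 5 5

# Binding i as a default argument captures a fresh value per iteration,
# which is what a per-iteration scope gives the first Ruby snippet for free.
copied = [lambda i=i: print(i) for i in range(1, 6)]
for f in copied:
    f()                 # 1 2 3 4 5
```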

    Read the article

  • Haskell math performance

    - by Travis Brown
    I'm in the middle of porting David Blei's original C implementation of Latent Dirichlet Allocation to Haskell, and I'm trying to decide whether to leave some of the low-level stuff in C. The following function is one example—it's an approximation of the second derivative of lgamma: double trigamma(double x) { double p; int i; x=x+6; p=1/(x*x); p=(((((0.075757575757576*p-0.033333333333333)*p+0.0238095238095238) *p-0.033333333333333)*p+0.166666666666667)*p+1)/x+0.5*p; for (i=0; i<6 ;i++) { x=x-1; p=1/(x*x)+p; } return(p); } I've translated this into more or less idiomatic Haskell as follows: trigamma :: Double -> Double trigamma x = snd $ last $ take 7 $ iterate next (x' - 1, p') where x' = x + 6 p = 1 / x' ^ 2 p' = p / 2 + c / x' c = foldr1 (\a b -> (a + b * p)) [1, 1/6, -1/30, 1/42, -1/30, 5/66] next (x, p) = (x - 1, 1 / x ^ 2 + p) The problem is that when I run both through Criterion, my Haskell version is six or seven times slower (I'm compiling with -O2 on GHC 6.12.1). Some similar functions are even worse. I know practically nothing about Haskell performance, and I'm not terribly interested in digging through Core or anything like that, since I can always just call the handful of math-intensive C functions through FFI. But I'm curious about whether there's low-hanging fruit that I'm missing—some kind of extension or library or annotation that I could use to speed up this numeric stuff without making it too ugly.

    Read the article

  • Nusphere PHPEd: PHP Function Hints Lost Arguments?

    - by Eli
    Hi All, My PHPEd suddenly stopped showing arguments and arg order in the hints, and now just shows a basic description of the function. Before I go digging around in the config files, has anyone else had this problem? Thanks! Edit: Sorry, I may not have been entirely clear on this. There is no problem with my own classes, only with the actual php functions. Example: How it used to work: I type a PHP function, say strpos. As soon as I type the '(' at the end of it, I get the little yellow box, showing something like this: int strpos ( string $haystack , mixed $needle [, int $offset=0 ] ) with the first argument bold. If I type it, and then a comma, it bolds the second arg, and so on. This is really nice, since PHP functions are a bit scrambled as far as argument order, and I don't have to look them up every time. How it works now: I type a php function, say strpos. As soon as I type the '(' at the end of it, I get the little yellow box. It says something like "strpos - Returns the numeric position of the first occurrence of needle in the haystack string." There are no arguments shown, which makes the little box basically worthless - I know what strpos does, I just want a reminder of the argument order. I think this may be a problem with the included PHPDoc, which I never use, but may be the source of the data for the hint box. I did recently upgrade to 5.6, but ended up removing it and restoring 5.2. I installed to a different folder, and uninstalled from there, but it may have overwritten something in the original folder? I'm using v5.2 (5220). Thanks!

    Read the article

  • What am I doing wrong?

    - by Erik Sapir
    I have the following code. I need class B to have a min priority queue of AToTime objects. AToTime has an operator>, and yet I receive an error telling me that there is no operator> matching the operands... #include <queue> #include <functional> using namespace std; class B{ //public functions public: B(); virtual ~B(); //private members private: log4cxx::LoggerPtr m_logger; class AToTime { //public functions public: AToTime(const ACE_Time_Value& time, const APtr a) : m_time(time), m_a(a){} bool operator >(const AToTime& other) { return m_time > other.m_time; } //public members - no point using any private members here public: ACE_Time_Value m_time; APtr m_a; }; priority_queue<AToTime, vector<AToTime>, greater<AToTime> > m_myMinHeap; };

    Read the article

  • Setting Connection Parameters via ADO for SQL Server

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query and then the queries can return information between date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code. Set all variables before running Recordset.OpenForward Connection->Execute("SET @GetStartDate = ..."); Connection->Execute("SET @GetEndDate = ..."); // Repeat for all parameters Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement? Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL // Command has been previously been created ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate"); ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate"); // Set values and attach etc... What I would like to know if there is something like: Connection->SetParameter("GetStartDate", "20090101"); Connection->SetParameter("GetEndDate", 20100101"); And these will persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.

    Read the article

  • Multiplication of 2 positive numbers giving a negative result

    - by krandiash
    My program is an implementation of a bloom filter. However, when I'm storing my hash function results in the bit array, the function (of the form f(i) = (a*i + b) % m where a,b,i,m are all positive integers) is giving me a negative result. The problem seems to be in the calculation of a*i which is coming out to be negative. Ignore the print statements in the code; those were for debugging. Basically, the value of temp in this block of code is coming out to be negative and so I'm getting an ArrayOutOfBoundsException. m is the bit array length, z is the number of hash functions being used, S is the set of values which are members of this bloom filter and H stores the values of a and b for the hash functions f1, f2, ..., fz. public static int[] makeBitArray(int m, int z, ArrayList<Integer> S, int[] H) { int[] C = new int[m]; for (int i = 0; i < z; i++) { for (int q = 0; q < S.size() ; q++) { System.out.println(H[2*i]); int temp = S.get(q)*(H[2*i]); System.out.println(temp); System.out.println(S.get(q)); System.out.println(H[2*i + 1]); System.out.println(m); int t = ((H[2*i]*S.get(q)) + H[2*i + 1])%m; System.out.println(t); C[t] = 1; } } return C; } Any help is appreciated.
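
    The arithmetic itself can be reproduced outside Java. In the Python sketch below, wrap32 and java_rem emulate 32-bit int wrap-around and Java's sign-following %, and the sample numbers are made up; the point is that a*i overflows int, so the usual fixes are doing the multiplication in a long, or applying a floor-mod such as Math.floorMod (or ((x % m) + m) % m) to the result.

```python
def wrap32(x):
    """Interpret x as a signed 32-bit Java int (emulates int overflow)."""
    return (x + 2**31) % 2**32 - 2**31

def java_rem(a, m):
    """Emulate Java's %, whose result keeps the sign of the dividend."""
    r = abs(a) % m
    return r if a >= 0 else -r

a, b, i, m = 123_456_789, 7, 987_654, 1000   # made-up values of the right size
h = wrap32(a * i + b)                        # what Java computes before the %
print(h)                                     # negative: the 32-bit product wrapped
print(java_rem(h, m))                        # negative remainder -> bad array index
print(((h % m) + m) % m)                     # floor-mod style fix: always in [0, m)
```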

    Read the article

  • Excel Automation Addin UDFs not accesible

    - by Eric
    I created the following automation addin: namespace AutomationAddin { [Guid("6652EC43-B48C-428a-A32A-5F2E89B9F305")] [ClassInterface(ClassInterfaceType.AutoDual)] [ComVisible(true)] public class MyFunctions { public MyFunctions() { } #region UDFs public string ToUpperCase(string input) { return input.ToUpper(); } #endregion [ComRegisterFunctionAttribute] public static void RegisterFunction(Type type) { Registry.ClassesRoot.CreateSubKey( GetSubKeyName(type, "Programmable")); RegistryKey key = Registry.ClassesRoot.OpenSubKey( GetSubKeyName(type, "InprocServer32"), true); key.SetValue("", System.Environment.SystemDirectory + @"\mscoree.dll", RegistryValueKind.String); } [ComUnregisterFunctionAttribute] public static void UnregisterFunction(Type type) { Registry.ClassesRoot.DeleteSubKey( GetSubKeyName(type, "Programmable"), false); } private static string GetSubKeyName(Type type, string subKeyName) { System.Text.StringBuilder s = new System.Text.StringBuilder(); s.Append(@"CLSID\{"); s.Append(type.GUID.ToString().ToUpper()); s.Append(@"}\"); s.Append(subKeyName); return s.ToString(); } } } I build it and it registers just fine. I open excel 2003, go to tools-Add-ins, click on the automation button and the addin appears in the list. I add it and it shows up in the addins list. but, the functions themselves don't appear. If I type it in it doesn't work and if I look in the function wizard, my addin doesn't show up as a category and the functions are not in the list. I am using excel 2003 on windows 7 x86. I built the project with visual studio 2010. This addin worked fine on windows xp built with visual studio 2008.

    Read the article

  • Calculating confidence intervals for a non-normal distribution

    - by Josiah
    Hi all, First, I should specify that my knowledge of statistics is fairly limited, so please forgive me if my question seems trivial or perhaps doesn't even make sense. I have data that doesn't appear to be normally distributed. Typically, when I plot confidence intervals, I would use the mean +- 2 standard deviations, but I don't think that is acceptable for a non-normal distribution. My sample size is currently set to 1000 samples, which would seem like enough to determine whether it is a normal distribution or not. I use Matlab for all my processing, so are there any functions in Matlab that would make it easy to calculate the confidence intervals (say 95%)? I know there are the 'quantile' and 'prctile' functions, but I'm not sure if that's what I need to use. The function 'mle' also returns confidence intervals for normally distributed data, although you can also supply your own pdf. Could I use ksdensity to create a pdf for my data, then feed that pdf into the mle function to give me confidence intervals? Also, how would I go about determining whether my data is normally distributed? I mean, I can currently tell just by looking at the histogram or the pdf from ksdensity, but is there a way to quantitatively measure it? Thanks!
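
    Not Matlab, but the distribution-free idea is easy to sketch: take percentiles of the data directly (roughly what prctile does), or bootstrap the statistic of interest and take percentiles of that. A small numpy illustration with synthetic, deliberately non-normal data:

```python
# Percentile and bootstrap-percentile intervals make no normality assumption.
import numpy as np

rng = np.random.default_rng(0)
data = rng.lognormal(mean=0.0, sigma=0.75, size=1000)   # clearly non-normal sample

# Empirical 95% range of the data itself
lo, hi = np.percentile(data, [2.5, 97.5])

# Bootstrap 95% CI for the mean: resample, recompute, take percentiles
boot_means = [rng.choice(data, size=data.size, replace=True).mean()
              for _ in range(2000)]
ci_lo, ci_hi = np.percentile(boot_means, [2.5, 97.5])

print(f"data 95% range: [{lo:.3f}, {hi:.3f}]")
print(f"bootstrap 95% CI for the mean: [{ci_lo:.3f}, {ci_hi:.3f}]")
```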

    Read the article

  • Handling missing/incomplete data in R--is there function to mask but not remove NAs?

    - by doug
    As you would expect from a DSL aimed at data analysis, R handles missing/incomplete data very well, for instance: Many R functions have an 'na.rm' flag that you can set to 'T' to remove the NAs: mean( c(5,6,12,87,9,NA,43,67), na.rm=T) But if you want to deal with NAs before the function call, you need to do something like this: to remove each 'NA' from a vector: vx = vx[!is.na(a)] to remove each 'NA' from a vector and replace it w/ a '0': ifelse(is.na(vx), 0, vx) to remove each entire row that contains an 'NA' from a data frame: dfx = dfx[complete.cases(dfx),] All of these functions permanently remove 'NA' or rows with an 'NA' in them. Sometimes this isn't quite what you want though--making an 'NA'-excised copy of the data frame might be necessary for the next step in the workflow, but in subsequent steps you often want those rows back (e.g., to calculate a column-wise statistic for a column that has missing rows caused by a prior call to 'complete cases' yet that column has no 'NA' values in it). To be as clear as possible about what I'm looking for: Python/NumPy has a class, 'masked array', with a 'mask' method, which lets you conceal--but not remove--NAs during a function call. Is there an analogous function in R?
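
    For reference, this is the numpy behaviour the question alludes to, shown only to make the analogy concrete (it is not an R answer):

```python
# Masking hides values from computations without removing them from the array.
import numpy as np
import numpy.ma as ma

vx = ma.masked_invalid(np.array([5, 6, 12, 87, 9, np.nan, 43, 67]))
print(vx.mean())      # 32.714..., the NaN is ignored but still present
print(vx.mask)        # [False False False False False  True False False]
print(vx.filled(0))   # a plain array with the masked slot replaced by 0
```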

    Read the article
