Search Results

Search found 210 results on 9 pages for 'integral'.


  • Haskell: Defaulting constraints to type

    - by yairchu
    Consider this example:

        applyKTimes :: Integral i => i -> (a -> a) -> a -> a
        applyKTimes 0 _ x = x
        applyKTimes k f x = applyKTimes (k-1) f (f x)

        applyThrice :: (a -> a) -> a -> a
        applyThrice = applyKTimes 3

    The 3 in applyThrice is defaulted by GHC to an Integer, as shown when compiling with -Wall:

        Warning: Defaulting the following constraint(s) to type 'Integer'
        'Integral t' arising from a use of 'applyKTimes'

    So I guess that Integer is the default for Integral a => a. Is there a way to define "default types" for other constraints too? Is using default types bad practice? (It does complain when using -Wall...)
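
    For the record, standard Haskell's defaulting hook is the module-level default declaration, which covers numeric classes only (Integral is one), so it answers the question for this case but not for arbitrary constraints; GHC's ExtendedDefaultRules extension relaxes the rules somewhat. A hedged sketch of the two usual ways to deal with the warning:

        -- make ambiguous numeric constraints default to Int instead of Integer
        -- (Haskell 98 `default` declarations apply to numeric classes only)
        default (Int)

        -- or pin the literal's type explicitly so no defaulting happens at all
        applyThrice :: (a -> a) -> a -> a
        applyThrice = applyKTimes (3 :: Int)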

    Read the article

  • Calculate posterior distribution of unknown mis-classification with PRTools in MATLAB

    - by Samuel Lampa
    I'm using the PRTools MATLAB library to train some classifiers, generating test data and testing the classifiers. I have the following details: N: total # of test examples; k: # of mis-classifications for each classifier and class. What I want to do: calculate and plot Bayesian posterior distributions of the unknown probabilities of mis-classification (denoted q), that is, as probability density functions over q itself (so P(q) will be plotted over q, from 0 to 1). I have that (math formulae, not MATLAB code!):

        P(q|k,N) = Likelihood * Prior / Normalization constant = P(k|q,N) * P(q|N) / P(k|N)

    The prior is set to 1, so I only need to calculate the likelihood and the normalization constant. I know that the likelihood can be expressed as (where B(N,k) is the binomial coefficient):

        P(k|q,N) = B(N,k) * q^k * (1-q)^(N-k)

    so the normalization constant is simply the integral of the expression above, from 0 to 1:

        P(k|N) = B(N,k) * integralFromZeroToOne( q^k * (1-q)^(N-k) )

    (The binomial coefficient B(N,k) appears in both terms, so it can be omitted.) Now, I've heard that the integral for the normalization constant should have a closed form, something like k!(N-k)! / (N+1)! Is that correct? (I have some lecture notes with this expression, but can't figure out whether it is for the normalization-constant integral or for the posterior distribution over q.) Also, hints are welcome on how to calculate this in practice (factorials easily create overflow errors, right?), and on how to compute the final plot (the posterior distribution over q, from 0 to 1).
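
    For reference (a standard identity, not from the original post): the normalization integral is a Beta function, and the quoted closed form is exactly right for it:

        \int_0^1 q^k (1-q)^{N-k} \, dq
            = B(k+1,\, N-k+1)
            = \frac{\Gamma(k+1)\, \Gamma(N-k+1)}{\Gamma(N+2)}
            = \frac{k!\,(N-k)!}{(N+1)!}

    The normalized posterior is therefore a Beta(k+1, N-k+1) density; evaluating it through log-gamma functions instead of raw factorials sidesteps the overflow concern, and the plot can then be made by evaluating that density on a grid of q values in [0, 1].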

    Read the article

  • Problem with circular definition in Scheme

    - by user8472
    I am currently working through SICP using Guile as my primary language for the exercises. I have found a strange behavior while implementing the exercises in chapter 3.5. I have reproduced this behavior with Guile 1.4, Guile 1.8.6 and Guile 1.8.7 on a variety of platforms and am certain it is not specific to my setup. This code works fine (and computes e):

        (define y (integral (delay dy) 1 0.001))
        (define dy (stream-map (lambda (x) x) y))
        (stream-ref y 1000)

    The following code should give an identical result:

        (define (solve f y0 dt)
          (define y (integral (delay dy) y0 dt))
          (define dy (stream-map f y))
          y)
        (solve (lambda (x) x) 1 0.001)

    But it yields the error message:

        standard input:7:14: While evaluating arguments to stream-map in expression (stream-map f y):
        standard input:7:14: Unbound variable: y
        ABORT: (unbound-variable)

    So when embedded in a procedure definition, the (define y ...) does not work, whereas outside the procedure, in the global environment at the REPL, it works fine. What am I doing wrong here? I can post the auxiliary code (i.e., the definitions of integral, stream-map, etc.) if necessary, too. With the exception of the system-dependent code for cons-stream, it is all in the book. My own implementation of cons-stream for Guile is as follows:

        (define-macro (cons-stream a b)
          `(cons ,a (delay ,b)))

    Read the article

  • C Programming. How to deep copy a struct?

    - by user69514
    I have the following two structs, where the "child" struct has a "rusage" struct as an element. Then I create two structs of type "child"; let's call them childA and childB. How do I copy just the rusage struct from childA to childB?

        typedef struct {
            int numb;
            char *name;
            pid_t pid;
            long userT;
            long systemT;
            struct rusage usage;
        } child;

        typedef struct {
            struct timeval ru_utime;  /* user time used */
            struct timeval ru_stime;  /* system time used */
            long ru_maxrss;           /* maximum resident set size */
            long ru_ixrss;            /* integral shared memory size */
            long ru_idrss;            /* integral unshared data size */
            long ru_isrss;            /* integral unshared stack size */
            long ru_minflt;           /* page reclaims */
            long ru_majflt;           /* page faults */
            long ru_nswap;            /* swaps */
            long ru_inblock;          /* block input operations */
            long ru_oublock;          /* block output operations */
            long ru_msgsnd;           /* messages sent */
            long ru_msgrcv;           /* messages received */
            long ru_nsignals;         /* signals received */
            long ru_nvcsw;            /* voluntary context switches */
            long ru_nivcsw;           /* involuntary context switches */
        } rusage;

    I did the following, but I guess it copies the memory location, because if I change the value of usage in childA, it also changes in childB:

        memcpy(&childA, &childB, sizeof(rusage));

    I know that gives childB all the values from childA. I have already taken care of the other fields in childB; I just need to be able to copy the rusage struct called usage that resides in the "child" struct.

    Read the article

  • .NET Math Libraries

    - by JoshReuben
    .NET Mathematical Libraries

    .NET Builder for MATLAB (The MathWorks Inc.)
    http://www.mathworks.com/products/netbuilder/
    MATLAB Builder NE generates MATLAB-based .NET and COM components with royalty-free deployment. It creates the components by encrypting MATLAB functions and generating either a .NET or COM wrapper around them.

    .NET/Link for Mathematica (www.wolfram.com)
    A product that integrates Mathematica and Microsoft's .NET platform in both directions:
    - Call .NET from Mathematica: use arbitrary .NET types directly from the Mathematica language.
    - Use and control the Mathematica kernel from a .NET program: turn Mathematica into a scripting shell to leverage its computational services, write custom front ends for Mathematica, or use Mathematica as a computational engine for another program.
    Comes with full source code. Leverages MathLink, Wolfram Research's protocol for sending data and commands back and forth between Mathematica and other programs; .NET/Link abstracts the low-level details of the MathLink C API.

    Extreme Optimization (http://www.extremeoptimization.com/)
    A collection of general-purpose mathematical and statistical classes built for the .NET Framework. It combines a math library, a vector and matrix library, and a statistics library in one package; download the trial of version 4.0 to try it out. Multi-core ready, with full support for Task Parallel Library features, including cancellation. Broad base of algorithms covering a wide range of numerical techniques, including linear algebra (BLAS and LAPACK routines), numerical analysis (integration and differentiation) and equation solvers.
    Mathematics (leverages parallelism using .NET 4.0's Task Parallel Library):
    - Basic math: complex numbers, 'special functions' like Gamma and Bessel functions, numerical differentiation.
    - Solving equations: solve equations in one variable, or solve systems of linear or nonlinear equations.
    - Curve fitting: linear and nonlinear curve fitting, cubic splines, polynomials, orthogonal polynomials.
    - Optimization: find the minimum or maximum of a function in one or more variables; linear programming and mixed integer programming.
    - Numerical integration: compute integrals over finite or infinite intervals, and over 2D and higher-dimensional regions; integrate systems of ordinary differential equations (ODEs).
    - Fast Fourier Transforms: 1D and 2D FFTs using managed or fast native code (32 and 64 bit).
    - BigInteger, BigRational, and BigFloat: perform operations with arbitrary precision.
    Vector and Matrix Library:
    - Real and complex vectors and matrices, with single and double precision for elements.
    - Structured matrix types, including triangular, symmetrical and band matrices; sparse matrices.
    - Matrix factorizations: LU decomposition, QR decomposition, singular value decomposition, Cholesky decomposition, eigenvalue decomposition.
    - Portability and performance: calculations can be done in 100% managed code, or in hand-optimized processor-specific native code (32 and 64 bit).
    Statistics:
    - Data manipulation: sort and filter data, process missing values, remove outliers, etc.; supports .NET data binding.
    - Statistical models: simple, multiple, nonlinear, logistic and Poisson regression; Generalized Linear Models; one- and two-way ANOVA.
    - Hypothesis tests: 14 hypothesis tests, including the z-test, t-test, F-test, runs test, and more advanced tests such as the Anderson-Darling test for normality, one- and two-sample Kolmogorov-Smirnov tests, and Levene's test for homogeneity of variances.
    - Multivariate statistics: k-means cluster analysis, hierarchical cluster analysis, principal component analysis (PCA), multivariate probability distributions.
    - Statistical distributions: 29 continuous and discrete statistical distributions, including uniform, Poisson, normal, lognormal, Weibull and Gumbel (extreme value) distributions.
    - Random numbers: random variates from any distribution, 4 high-quality random number generators, low-discrepancy sequences, shufflers.
    New in version 4.0 (November 2010): support for .NET Framework 4.0 and Visual Studio 2010; parallelized with the TPL (multicore ready); a sparse linear program solver that can solve problems with more than 1 million variables; mixed integer linear programming using a branch-and-bound algorithm; more special functions (hypergeometric, Riemann zeta, elliptic integrals, Fresnel functions, Dawson's integral); a full set of window functions for FFTs.
    Pricing (license / update subscription):
    - Single Developer License: $999 / $399
    - Team License (3 developers): $1999 / $799
    - Department License (8 developers): $3999 / $1599
    - Site License (unlimited developers in one physical location): $7999 / $3199

    NMath (http://www.centerspace.net)
    .NET math and statistics libraries: matrix and vector classes, random number generators, Fast Fourier Transforms (FFTs), numerical integration, linear programming, linear regression, curve and surface fitting, optimization, hypothesis tests, analysis of variance (ANOVA), probability distributions, principal component analysis, and cluster analysis. Built on the Intel Math Kernel Library (MKL), which contains highly optimized, extensively threaded versions of BLAS (Basic Linear Algebra Subroutines) and LAPACK (Linear Algebra PACKage).
    Pricing (license / update subscription):
    - Single Developer License: $1295 / $388
    - Team License (5 developers): $5180 / $1554

    DotNumerics (http://www.dotnumerics.com/NumericalLibraries/Default.aspx)
    Free. DotNumerics is a website dedicated to numerical computing for .NET that includes a C# numerical library for .NET containing algorithms for linear algebra, differential equations and optimization problems. The linear algebra library includes CSLapack, CSBlas and CSEispack, ports from Fortran to C# of LAPACK, BLAS and EISPACK, respectively.
    - Linear algebra (CSLapack, CSBlas and CSEispack): systems of linear equations, eigenvalue problems, least-squares solutions of linear systems, and singular value problems.
    - Differential equations: initial-value problems for nonstiff and stiff ordinary differential equations (ODEs), via explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton methods.
    - Optimization: unconstrained and bounded constrained optimization of multivariate functions (L-BFGS-B, Truncated Newton and Simplex methods).

    Math.NET Numerics (http://numerics.mathdotnet.com/)
    Free. An open-source numerical library that includes special functions, linear algebra, probability models, random numbers, interpolation and integral transforms. A merger of dnAnalytics with Math.NET Iridium; in addition to a purely managed implementation, it will also support native hardware optimization. Features:
    - constants & special functions
    - complex type support
    - real and complex, dense and sparse linear algebra (with LU, QR, eigenvalues, ... decompositions)
    - non-uniform probability distributions, multivariate distributions, sample generation
    - alternative uniform random number generators
    - descriptive statistics, including order statistics
    - various interpolation methods, including barycentric approaches and splines
    - numerical function integration (quadrature) routines
    - integral transforms, like the Fourier transform (FFT, with arbitrary-length support) and the Hartley transform
    - spectral-space-aware sequence manipulation (signal processing)
    - combinatorics, polynomials, quaternions, basic number theory
    - parallelized where appropriate, to leverage multi-core and multi-processor systems
    - fully managed or (if available) using native libraries (Intel MKL, ACMS, CUDA, FFTW)
    - provides a native facade for F# developers

    Read the article

  • Why Haskell can deduce the [] type in this function

    - by Sili
        rho x = map (((flip mod) x).(\a -> a^2-1)) (rho x)

    This function will generate an infinite list. I tested it in GHCi, and the function type is:

        *Main> :t rho
        rho :: Integral b => b -> [b]

    If I define a function like this:

        fun x = ((flip mod) x).(\a -> a^2-1)

    the type is:

        *Main> :t fun
        fun :: Integral c => c -> c -> c

    My question is: how can Haskell deduce the function type b -> [b]? We don't have any []-typed data in this function. Thanks!
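
    The list type comes from map itself, not from any list literal in the definition. A sketch of the reasoning GHC performs, restating the function with its inferred signature:

        -- map's type forces both its second argument and its result to be lists:
        --   map :: (a -> b) -> [a] -> [b]
        -- In the body, (rho x) is passed as map's second argument, so rho's
        -- result must be some list [a]; map's result (also a list) is what rho
        -- returns, so [b] = [a]. The modular arithmetic then pins the element
        -- type to x's own Integral type, giving:
        rho :: Integral b => b -> [b]
        rho x = map (((flip mod) x) . (\a -> a^2 - 1)) (rho x)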

    Read the article

  • Website Health Check - Keyword Blunders - Part 2

    Website Health Check is becoming an integral part of Search Engine Optimization (SEO) because it helps find mistakes in websites that commonly go unnoticed. Ultimately, it can mean the difference between coming up as the first or the last result in a search engine query. A small part regarding keywords (keyword density, keyword stuffing, spelling errors, cloaking, etc.) is explained here.

    Read the article

  • Important Differences Between SEO and SEM

    Though both may be integral parts of the search engine cosmos, there are still huge differences between SEO and SEM in terms of features and the way in which they are implemented. Many have said that SEO India is a part or a division of SEM India. SEO covers factors such as meta tags, keywords and their density, titles and HTML coding, whereas SEM encompasses factors such as search engine submissions, directory submissions, paid inclusions and certain others.

    Read the article

  • Top Three Advantages of Using Link Building Services

    Link building services are an integral part of internet marketing strategies for any website. It is one of the top methods of directing quality web traffic. It can be done by anyone who knows anything about internet marketing; however, experts in the field are able to optimize the process, which gives the best results in the shortest amount of time.

    Read the article

  • SEO Can Really Help Your Company

    As the corporate world becomes ever more information-hungry, SEO has become an integral part of marketing strategy. Most companies know that having a website is good for business, but few understand how to make it work properly for them. Engaging a reputable SEO firm to do the work is the best way forward, because if you cannot be found when potential customers are looking for you, you will not have an effective presence online.

    Read the article

  • The Use of Article Keywords For Search Engine Optimization

    Keywords are an integral part of the process, or algorithm, that search engines use to categorize all information held on the World Wide Web. Therefore, in internet marketing terms, they can be compared to bait in fishing: something put in place in order to bring about a particular result.

    Read the article

  • Free Practical SEO

    Effective SEO is achieved through a complete knowledge of the integral workings of the internet and expert knowledge of how to process this information and put it in place. There are a number of key factors that contribute to the success of your website, most notably quality, relevance and popularity.

    Read the article

  • SEO Strategy - Building One Way Links

    One-way links are integral to search engine optimization (SEO). They tell the search engines, "Hey, this website is interesting!" Now that you know developing links is important, how do you go about doing it? There are several different ways to go about link building. No single link building scheme will get a blog or website listed high enough in the Google rankings to earn money; you must diversify and use many different approaches to obtain one-way links.

    Read the article

  • Working With a Web Design Company

    Web design and web development have become an integral part of every business today. If you are a business owner and are serious about staying ahead of the competition, you must consider online advertising and promotion. This will require you to work with a good web design company. There are a huge number of advantages and benefits associated with promoting a business online.

    Read the article

  • Most Effective Link Building Techniques

    It is a well-known fact that links are an integral part of building and promoting a website with any of the search engines, yet their use has been ineffective for many and misunderstood by most. What are the most effective link building techniques? Which link building methods help increase your website's PR? Read on and find out how to be smart with link building.

    Read the article

  • Can a conforming C implementation #define NULL to be something wacky

    - by janks
    I'm asking because of the discussion that's been provoked in this thread: http://stackoverflow.com/questions/2597142/when-was-the-null-macro-not-0/2597232 - trying to have a serious back-and-forth discussion using comments under other people's replies is not easy or fun. So I'd like to hear what our C experts think without being restricted to 500 characters at a time.

    The C standard has precious few words to say about NULL and null pointer constants. There are only two relevant sections that I can find. First:

        3.2.2.3 Pointers

        An integral constant expression with the value 0, or such an expression cast to type void *, is called a null pointer constant. If a null pointer constant is assigned to or compared for equality to a pointer, the constant is converted to a pointer of that type. Such a pointer, called a null pointer, is guaranteed to compare unequal to a pointer to any object or function.

    and second:

        4.1.5 Common definitions <stddef.h>

        The macros are NULL, which expands to an implementation-defined null pointer constant; ...

    The question is, can NULL expand to an implementation-defined null pointer constant that is different from the ones enumerated in 3.2.2.3? In particular, could it be defined as:

        #define NULL __builtin_magic_null_pointer

    or even:

        #define NULL ((void*)-1)

    My reading of 3.2.2.3 is that it specifies that an integral constant expression of 0, and an integral constant expression of 0 cast to type void*, must be among the forms of null pointer constant that the implementation recognizes, but that it isn't meant to be an exhaustive list. I believe that the implementation is free to recognize other source constructs as null pointer constants, so long as no other rules are broken. So, for example, it is provable that

        #define NULL (-1)

    is not a legal definition, because in

        if (NULL) do_stuff();

    do_stuff() must not be called, whereas with

        if (-1) do_stuff();

    do_stuff() must be called; since they are equivalent, this cannot be a legal definition of NULL. But the standard says that integer-to-pointer conversions (and vice versa) are implementation-defined; therefore it could define the conversion of -1 to a pointer as a conversion that produces a null pointer. In which case if ((void*)-1) would evaluate to false, and all would be well.

    So what do other people think? I'd ask everybody to especially keep in mind the "as-if" rule described in 2.1.2.3 Program execution. It's huge and somewhat roundabout, so I won't paste it here, but it essentially says that an implementation merely has to produce the same observable side effects as are required of the abstract machine described by the standard. Any optimizations, transformations, or whatever else the compiler wants to do to your program are perfectly legal so long as the observable side effects of the program aren't changed by them. So if you are looking to prove that a particular definition of NULL cannot be legal, you'll need to come up with a program that can prove it: either one like mine that blatantly breaks other clauses in the standard, or one that can legally detect whatever magic the compiler has to do to make the strange NULL definition work.

    Steve Jessop found a way for a program to detect that NULL isn't defined to be one of the two forms of null pointer constants in 3.2.2.3, which is to stringize the constant:

        #define stringize_helper(x) #x
        #define stringize(x) stringize_helper(x)

    Using this macro, one could

        puts(stringize(NULL));

    and "detect" that NULL does not expand to one of the forms in 3.2.2.3. Is that enough to render other definitions illegal? I just don't know. Thanks!

    Read the article

  • Use QuickCheck by generating primes

    - by Dan
    Background

    For fun, I'm trying to write a property for QuickCheck that can test the basic idea behind cryptography with RSA.

    - Choose two distinct primes, p and q. Let N = p*q.
    - e is some number relatively prime to (p-1)(q-1) (in practice, e is usually 3 for fast encoding).
    - d is the modular inverse of e modulo (p-1)(q-1).
    - For all x such that 1 < x < N, it is always true that (x^e)^d = x modulo N.

    In other words, x is the "message", raising it to the eth power mod N is the act of "encoding" the message, and raising the encoded message to the dth power mod N is the act of "decoding" it. (The property is also trivially true for x = 1, a case which is its own encryption.)

    Code

    Here are the methods I have coded up so far:

        import Test.QuickCheck

        -- modular exponentiation
        modExp :: Integral a => a -> a -> a -> a
        modExp y z n = modExp' (y `mod` n) z `mod` n
          where modExp' y z | z == 0 = 1
                            | even z = modExp (y*y) (z `div` 2) n
                            | odd z  = (modExp (y*y) (z `div` 2) n) * y

        -- relatively prime
        rPrime :: Integral a => a -> a -> Bool
        rPrime a b = gcd a b == 1

        -- multiplicative inverse (modular)
        mInverse :: Integral a => a -> a -> a
        mInverse 1 _ = 1
        mInverse x y = (n * y + 1) `div` x
          where n = x - mInverse (y `mod` x) x

        -- just a quick way to test for primality
        n `divides` x = x `mod` n == 0
        primes = 2:filter isPrime [3..]
        isPrime x = null . filter (`divides` x) $ takeWhile (\y -> y*y <= x) primes

        -- the property
        prop_rsa (p,q,x) = isPrime p && isPrime q &&
                           p /= q && x > 1 && x < n &&
                           rPrime e t ==>
                           x == (x `powModN` e) `powModN` d
          where e = 3
                n = p*q
                t = (p-1)*(q-1)
                d = mInverse e t
                a `powModN` b = modExp a b n

    (Thanks, Google and random blog, for the implementation of the modular multiplicative inverse.)

    Question

    The problem should be obvious: there are way too many conditions on the property to make it at all usable. Trying to invoke quickCheck prop_rsa in GHCi made my terminal hang. So I've poked around the QuickCheck manual a bit, and it says that properties may take the form

        forAll <generator> $ \<pattern> -> <property>

    How do I make a <generator> for prime numbers? Or with the other constraints, so that quickCheck doesn't have to sift through a bunch of failed conditions? Any other general advice (especially regarding QuickCheck) is welcome.
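
    One way to avoid the discards is to generate well-formed inputs directly with forAll. A hedged sketch that reuses the post's own definitions (primes, rPrime, mInverse, modExp); the pool size of 200 and the hard-coded e = 3 are arbitrary choices matching the post:

        -- only produce valid triples: two distinct primes whose totient
        -- is coprime to e = 3, and a message x with 1 < x < p*q
        rsaInputs :: Gen (Integer, Integer, Integer)
        rsaInputs = do
          p <- elements pool
          q <- elements pool `suchThat` (/= p)
          x <- choose (2, p*q - 1)
          return (p, q, x)
          where
            -- keep only primes with p-1 coprime to 3, so that e = 3 is
            -- automatically coprime to (p-1)*(q-1) for any pair drawn
            pool = [p | p <- take 200 primes, rPrime 3 (p - 1)]

        prop_rsa' :: Property
        prop_rsa' = forAll rsaInputs $ \(p, q, x) ->
          let n = p*q
              d = mInverse 3 ((p-1)*(q-1))
          in modExp (modExp x 3 n) d n == x

    Since every generated triple already satisfies the preconditions, quickCheck prop_rsa' should run without sifting through failed conditions.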

    Read the article

  • Compact number formatting behavior in Java (automatically switching between decimal and scientific notation)

    - by kostmo
    I am looking for a way to format a floating point number dynamically in either standard decimal format or scientific notation, depending on the value of the number. For moderate magnitudes, the number should be formatted as a decimal with trailing zeros suppressed. If the floating point number is equal to an integral value, the decimal point should also be suppressed. For extreme magnitudes (very small or very large), the number should be expressed in scientific notation. Alternately stated: if the number of characters in the expression as standard decimal notation exceeds a certain threshold, switch to scientific notation. I should have control over the maximum number of digits of precision, but I don't want trailing zeros appended to express the minimum precision; all trailing zeros should be suppressed. Basically, it should optimize for compactness and readability. Examples:

        2.80000            -> 2.8
        765.000000         -> 765
        0.0073943162953    -> 0.00739432   (limit digits of precision, to 6 in this case)
        0.0000073943162953 -> 7.39432E-6   (switch to scientific notation if the magnitude is small enough, less than 1E-5 in this case)
        7394316295300000   -> 7.39432E+15  (switch to scientific notation if the magnitude is large enough, for example greater than 1E+10)
        0.0000073900000000 -> 7.39E-6      (strip trailing zeros from the significand in scientific notation)
        0.000007299998344  -> 7.3E-6       (rounding from the 6-digit precision limit causes this number to have trailing zeros, which are stripped)

    Here's what I've found so far: The .toString() method of the Number class does most of what I want, except it doesn't upconvert to integer representation when possible, and it will not express large integral magnitudes in scientific notation. Also, I'm not sure how to adjust the precision. The "%G" format string to the String.format(...) function allows me to express numbers in scientific notation with adjustable precision, but does not strip trailing zeros. I'm wondering if there's already some library function out there that meets these criteria. I guess the only stumbling block for writing this myself is having to strip the trailing zeros from the significand in scientific notation produced by %G.
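
    The selection logic itself is small and language-independent; here is a hedged sketch of it (in Haskell, to match the other examples added in this listing, rather than Java; the name compact, the 6-digit limit and the 1E-5/1E+10 cutoffs are taken from the question, and note that Haskell's showEFloat prints a lowercase 'e' where Java would print 'E'):

        import Numeric (showEFloat, showFFloat)
        import Data.List (dropWhileEnd)

        -- choose fixed or scientific notation by magnitude, keep at most
        -- prec significant digits, then strip trailing zeros (and any
        -- dangling decimal point)
        compact :: Double -> String
        compact x
          | x == 0               = "0"
          | m < 1e-5 || m > 1e10 = stripSci (showEFloat (Just (prec - 1)) x "")
          | otherwise            = strip (showFFloat (Just decimals) x "")
          where
            prec     = 6 :: Int
            m        = abs x
            -- digits after the point needed for prec significant digits
            decimals = max 0 (prec - 1 - floor (logBase 10 m))
            strip    = dropWhileEnd (== '.') . dropWhileEnd (== '0')
            stripSci s = let (mant, ex) = break (== 'e') s
                         in strip mant ++ ex

    With these definitions, compact 765.000000 yields "765", compact 0.0000073943162953 yields "7.39432e-6", and compact 0.000007299998344 yields "7.3e-6".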

    Read the article
