Search Results

Search found 943 results on 38 pages for 'np intermediate'.

  • Can NP-Intermediate exist if P = NP?

    - by Jason Baker
    My understanding is that Ladner's theorem is basically this: P != NP implies that there exists a set NPI (in NP) such that NPI is not in P and NPI is not NP-complete. What happens to this theorem if we assume that P = NP rather than P != NP? We know that if no NP-intermediate set exists, then P = NP. But can an NP-intermediate set exist if P = NP?
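
    For reference, the theorem in symbols (a paraphrase under the usual definitions, with NPC standing for the class of NP-complete sets):

        \mathrm{P} \neq \mathrm{NP} \;\Longrightarrow\; \exists\, L \in \mathrm{NP} \ \text{with}\ L \notin \mathrm{P} \ \text{and}\ L \notin \mathrm{NPC}

    Its contrapositive is exactly the second claim above: if no such L exists, then P = NP.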

    Read the article

  • How is intermediate data organized in MapReduce?

    - by Pedro Cattori
    From what I understand, each mapper outputs an intermediate file. The intermediate data (the data contained in each intermediate file) is then sorted by key. Then a reducer is assigned a key by the master. The reducer reads from the intermediate file containing that key and calls reduce on the data it has read. But in detail, how is the intermediate data organized? Can the data corresponding to a key be held in multiple intermediate files? What happens when there is too much data corresponding to one key to be held in a single file? In short, how do intermediate partitions differ from intermediate files, and how are these differences dealt with in the implementation?
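
    For concreteness, a minimal sketch of the usual hash-partitioning rule (the function and names here are illustrative, not taken from any specific MapReduce implementation):

        # Routes every (key, value) pair a mapper emits to one of
        # num_reducers partitions. All records sharing a key land in the
        # same partition, so the partition, not the file, is the unit a
        # reducer consumes.
        def partition(key, num_reducers):
            return hash(key) % num_reducers

    Under this convention a single key never spans partitions, but in Hadoop-style implementations each mapper writes its own copy of every partition, so the records for one key are typically spread across one file (or file segment) per mapper until the reducer-side merge brings them together.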

    Read the article

  • Why is numpy's einsum faster than numpy's built-in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; those differences would show up in microseconds, not milliseconds:

        arr_1D=np.arange(500,dtype=np.double)
        large_arr_1D=np.arange(100000,dtype=np.double)
        arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500)
        arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500)

    First let's look at the np.sum function:

        np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D))
        True

        %timeit np.sum(arr_3D)
        10 loops, best of 3: 142 ms per loop

        %timeit np.einsum('ijk->', arr_3D)
        10 loops, best of 3: 70.2 ms per loop

    Powers:

        np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D))
        True

        %timeit arr_3D*arr_3D*arr_3D
        1 loops, best of 3: 1.32 s per loop

        %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
        1 loops, best of 3: 694 ms per loop

    Outer product:

        np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D))
        True

        %timeit np.outer(arr_1D, arr_1D)
        1000 loops, best of 3: 411 us per loop

        %timeit np.einsum('i,k->ik', arr_1D, arr_1D)
        1000 loops, best of 3: 245 us per loop

    All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect a speedup in an operation like this:

        np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D))
        True

        %timeit np.sum(arr_2D*arr_3D)
        1 loops, best of 3: 813 ms per loop

        %timeit np.einsum('ij,oij->', arr_2D, arr_3D)
        10 loops, best of 3: 85.1 ms per loop

    np.einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent? The DGEMM case, for completeness:

        np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D))
        True

        %timeit np.einsum('ij,jk',arr_2D,arr_2D)
        10 loops, best of 3: 56.1 ms per loop

        %timeit np.dot(arr_2D,arr_2D)
        100 loops, best of 3: 5.17 ms per loop

    The leading theory is from @seberg's comment that np.einsum can make use of SSE2, while numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited evidence can be found by changing the dtype of the input arrays and observing the speed difference, and from the fact that not everyone observes the same trends in timings.
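
    The dtype experiment mentioned at the end might look something like this minimal sketch (array sizes and repeat counts are arbitrary choices, not taken from the original timings):

        import numpy as np
        import timeit

        # If einsum's edge comes from SIMD (the SSE2 hypothesis), the gap
        # between np.sum and np.einsum should change as the dtype changes.
        for dtype in (np.float64, np.float32, np.int64):
            arr = np.arange(500**2, dtype=dtype).reshape(500, 500)
            t_sum = timeit.timeit(lambda: np.sum(arr), number=100)
            t_ein = timeit.timeit(lambda: np.einsum('ij->', arr), number=100)
            print('%-8s np.sum: %.4f s   np.einsum: %.4f s'
                  % (dtype.__name__, t_sum, t_ein))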

    Read the article

  • P = NP problem: What are the most promising methods?

    - by phimuemue
    Hello everybody, I know that P = NP has not been resolved up to now, but can anybody tell me something about the following: what are currently the most promising mathematical or computer-scientific methods that could help tackle this problem? Or are no such potentially helpful methods known so far? Is there any (free) compendium on this topic where I can find all or most of the research done in this area?

    Read the article

  • Simple reduction (NP-completeness)

    - by Allen
    Hey guys, I'm looking for a way to prove that the bicriteria shortest path problem is NP-complete. That is, given a graph with lengths and weights on its edges, I need to decide whether there exists a path in the graph from s to t with total length <= L and total weight <= W. I know that I must take an NP-complete problem and reduce it to this one. We have at our disposal the following problems to choose from: 3-SAT, independent set, vertex cover, Hamiltonian cycle, and 3-dimensional matching. Any ideas on which may be viable? Thanks
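
    Stated formally (with l(e) and w(e) as assumed names for the edge lengths and weights), the decision problem asks:

        \exists\ s\text{-}t \ \text{path}\ \pi \ :\ \sum_{e \in \pi} l(e) \le L \quad\text{and}\quad \sum_{e \in \pi} w(e) \le W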

    Read the article

  • The subsets-sum problem and the solvability of NP-complete problems

    - by G.E.M.
    I was reading about the subset-sum problem when I came up with what appears to be a general-purpose algorithm for solving it:

        (defun subset-contains-sum (set sum)
          (let ((subsets) (new-subset) (new-sum))
            (dolist (element set)
              (dolist (subset-sum subsets)
                (setf new-subset (cons element (car subset-sum)))
                (setf new-sum (+ element (cdr subset-sum)))
                (if (= new-sum sum)
                    (return-from subset-contains-sum new-subset))
                (setf subsets (cons (cons new-subset new-sum) subsets)))
              (setf subsets (cons (cons element element) subsets)))))

    "set" is a list not containing duplicates and "sum" is the sum to search subsets for. "subsets" is a list of cons cells where the "car" is a subset list and the "cdr" is the sum of that subset. New subsets are created from old ones in O(1) time by just consing the element onto the front. I am not sure what its runtime complexity is, but it appears that with each element "sum" grows by, the size of "subsets" doubles, plus one, so it appears to me to be at least quadratic. I am posting this because my impression before was that NP-complete problems tend to be intractable and that the best one can usually hope for is a heuristic, but this appears to be a general-purpose solution that will, assuming you have the CPU cycles, always give you the correct answer. How many other NP-complete problems can be solved like this one?
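
    A minimal Python sketch of the same enumeration (same subset/total pairing; the names are my own). Note that a list that roughly doubles with each element grows like 2^n, which is where the worst case really lives:

        def subset_contains_sum(items, target):
            # Python rendering of the Lisp above: `subsets` holds
            # (subset, total) pairs, and each new element is prepended
            # to a copy of every existing subset.
            subsets = []
            for element in items:
                extended = []
                for subset, total in subsets:
                    new_subset = [element] + subset
                    new_total = element + total
                    if new_total == target:
                        return new_subset
                    extended.append((new_subset, new_total))
                subsets.extend(extended)
                if element == target:  # singleton case
                    return [element]
                subsets.append(([element], element))
            return None

        print(subset_contains_sum([3, 5, 7], 12))  # -> [7, 5]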

    Read the article

  • Is the board game "Go" NP-complete?

    - by sharkin
    There are plenty of chess AIs around, and evidently some are good enough to beat some of the world's greatest players. I've heard that many attempts have been made to write successful AIs for the board game Go, but so far nothing has been conceived beyond average amateur level. Could it be that the task of mathematically calculating the optimal move at any given time in Go is an NP-complete problem?

    Read the article

  • Proving that P <= NP

    - by Gail
    As most people know, P = NP is unproven and seems unlikely to be true. A proof would have to show both P <= NP and NP <= P. Only one of those is hard, though: P <= NP is almost true by definition. In fact, that's the only way I know how to state that P <= NP; it's just intuitive. How would you prove that P <= NP?
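
    For what it's worth, one standard textbook argument (sketched here under the certificate/verifier definition of NP) is that a polynomial-time decider is itself a verifier that ignores its certificate:

        L \in \mathrm{P}
        \;\Longrightarrow\; \exists\ \text{poly-time } M \ \text{with}\ x \in L \iff M(x)=1
        \;\Longrightarrow\; V(x,c) := M(x) \ \text{is a poly-time verifier for } L
        \;\Longrightarrow\; L \in \mathrm{NP}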

    Read the article

  • Solving the NP-complete problem in XKCD

    - by Adam Tuttle
    The problem/comic in question: http://xkcd.com/287/

    I'm not sure this is the best way to do it, but here's what I've come up with so far. I'm using CFML, but it should be readable by anyone.

        <cffunction name="testCombo" returntype="boolean">
            <cfargument name="currentCombo" type="string" required="true" />
            <cfargument name="currentTotal" type="numeric" required="true" />
            <cfargument name="apps" type="array" required="true" />

            <cfset var a = 0 />
            <cfset var found = false />

            <cfloop from="1" to="#arrayLen(arguments.apps)#" index="a">
                <cfset arguments.currentCombo = listAppend(arguments.currentCombo, arguments.apps[a].name) />
                <cfset arguments.currentTotal = arguments.currentTotal + arguments.apps[a].cost />
                <cfif arguments.currentTotal eq 15.05>
                    <!--- print current combo --->
                    <cfoutput><strong>#arguments.currentCombo# = 15.05</strong></cfoutput><br />
                    <cfreturn true />
                <cfelseif arguments.currentTotal gt 15.05>
                    <cfoutput>#arguments.currentCombo# > 15.05 (aborting)</cfoutput><br />
                    <cfreturn false />
                <cfelse>
                    <!--- less than 15.05 --->
                    <cfoutput>#arguments.currentCombo# < 15.05 (traversing)</cfoutput><br />
                    <cfset found = testCombo(arguments.currentCombo, arguments.currentTotal, arguments.apps) />
                </cfif>
            </cfloop>
        </cffunction>

        <cfset mf = {name="Mixed Fruit", cost=2.15} />
        <cfset ff = {name="French Fries", cost=2.75} />
        <cfset ss = {name="side salad", cost=3.35} />
        <cfset hw = {name="hot wings", cost=3.55} />
        <cfset ms = {name="moz sticks", cost=4.20} />
        <cfset sp = {name="sampler plate", cost=5.80} />
        <cfset apps = [ mf, ff, ss, hw, ms, sp ] />

        <cfloop from="1" to="6" index="b">
            <cfoutput>#testCombo(apps[b].name, apps[b].cost, apps)#</cfoutput>
        </cfloop>

    The above code tells me that the only combination that adds up to $15.05 is seven orders of Mixed Fruit, and it takes 232 executions of my testCombo function to complete. Is there a better algorithm to arrive at the correct solution? Did I come to the correct solution?
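
    For a cross-check, a minimal brute-force sketch in Python (not CFML; the names are mine) over the same menu, working in integer cents so floating-point arithmetic can't drift. Counts are capped at 7 of each item because eight orders of even the cheapest item already exceed $15.05:

        from itertools import product

        # Menu from the comic, prices in integer cents.
        menu = [("mixed fruit", 215), ("french fries", 275), ("side salad", 335),
                ("hot wings", 355), ("mozzarella sticks", 420), ("sampler plate", 580)]
        target = 1505  # $15.05

        # counts[i] = number of orders of menu item i; 0..7 each suffices,
        # since 8 * 215 = 1720 > 1505.
        for counts in product(range(8), repeat=len(menu)):
            total = sum(c * price for c, (_, price) in zip(counts, menu))
            if total == target:
                print([(name, c) for c, (name, _) in zip(counts, menu) if c])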

    Read the article

  • NP-complete problem in Prolog

    - by Ashley
    I saw this ECLiPSe solution to the problem mentioned in this XKCD comic. I tried to convert this to pure Prolog.

        go :-
            Total = 1505,
            Prices = [215, 275, 335, 355, 420, 580],
            length(Prices, N),
            length(Amounts, N),
            totalCost(Prices, Amounts, 0, Total),
            writeln(Total).

        totalCost([], [], TotalSoFar, TotalSoFar).
        totalCost([P|Prices], [A|Amounts], TotalSoFar, EndTotal) :-
            between(0, 10, A),
            Cost is P*A,
            TotalSoFar1 is TotalSoFar + Cost,
            totalCost(Prices, Amounts, TotalSoFar1, EndTotal).

    I don't think that this is the best / most declarative solution that one can come up with. Does anyone have any suggestions for improvement? Thanks in advance!

    Read the article

  • Hibernate modeling relationships managed through an intermediate table

    - by shikarishambu
    I have a data model that has an intermediate table to manage relationships between entities. For example, the Person and Organization tables are related through the Relationship table:

        Party (table)
          - ID

        Person (table)
          - ID (references Party.ID)
          - name

        Organization (table)
          - ID (references Party.ID)
          - name

        Relationship (table)
          - ID (PK)
          - type (references relationshiptype lookup)
          - fromID (references Party.ID)
          - ToID (references Party.ID)
          - fromDate
          - ToDate

    Type+fromID+ToID+fromDate+ToDate is guaranteed to be unique. How do I manage this using Hibernate? TIA

    Read the article

  • Java book for programmer with intermediate experience

    - by Anthony Forloney
    I am well aware of this previous question here and have heard great things about those books. I actually bought two of them, Effective Java and Java in a Nutshell, as an early Christmas present. I am looking for a good Java book, or books, to further my understanding of Java from a recent college graduate's point of view. I learned Java in college and would consider it the language I am most comfortable with. Can anyone recommend a good "intermediate" Java book for a situation like mine? This question should be community wiki; I just don't know how to migrate it. I apologize for any inconvenience. Thank you in advance.

    Read the article

  • Hibernate mapping to manage bidirectional relationship using intermediate table

    - by shikarishambu
    I have an object hierarchy that is as follows: Party is inherited by Organization and Person; Organization is inherited by Customer and Vendor; Person is inherited by Contact. In the database I have the following tables: Party, Customer, Vendor, Contact. All of them have a corresponding row in the Party table. A Contact belongs to either a Vendor or a Customer. I have a field on the Contact table for org_party_id. However, since the organization can be either a Customer or a Vendor, I need to be able to look at different tables. Is there a way to map this in Hibernate? Or a better way to manage it in the DB / Hibernate?

    Read the article

  • Do you keep intermediate files under version control?

    - by Subb
    Here's an example with a Flash project, but I'm sure a lot of projects are like this. Suppose I create an image with Photoshop. I then export this image as a JPEG for integration in Flash. I compile the fla as an asset library, which is then used in my Flash Builder project to produce the final swf. So it goes like:

        psd => jpg -> fla => swc -> Flash Builder project => swf

        =>  : produces
        ->  : is used in

    The psd, fla, and Flash Builder project are source files: they are not the result of some process. The jpg and swc are what I would call "intermediate" files. They are the product of one (or more) source file(s). The swf is the final result. So, would you keep those intermediate files under version control? How do you deal with them?

    Read the article

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me!

    However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these?

        import numpy as np
        import time

        def reshape_vector(v):
            b = np.empty((3,1))
            for i in range(3):
                b[i][0] = v[i]
            return b

        def unit_vectors(r):
            return r / np.sqrt((r*r).sum(0))

        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            A = 1e-7
            num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i)
            den = np.sqrt(np.sum(relative*relative, 0))**3
            B = np.sum(num/den, 1)
            return B

        N = 20000            # number of dipoles
        r_i = np.random.random((3,N))    # positions of dipoles
        mom_i = np.random.random((3,N))  # moments of dipoles
        a = np.random.random((3,3))      # three basis vectors for this crystal
        n = [10,10,10]       # points at which to evaluate sum
        gamma_mu = 135.5     # a constant

        t_start = time.clock()
        for i in range(n[0]):
            r_frac_x = np.float(i)/np.float(n[0])
            r_test_x = r_frac_x * a[0]
            for j in range(n[1]):
                r_frac_y = np.float(j)/np.float(n[1])
                r_test_y = r_frac_y * a[1]
                for k in range(n[2]):
                    r_frac_z = np.float(k)/np.float(n[2])
                    r_test = r_test_x + r_test_y + r_frac_z * a[2]
                    r_test_fast = reshape_vector(r_test)
                    B = calculate_dipole(r_test_fast, r_i, mom_i)
                    omega = gamma_mu*np.sqrt(np.dot(B,B))
                    # write r_test, B and omega to a file
            frac_done = np.float(i+1)/(n[0]+1)
            t_elapsed = (time.clock()-t_start)
            t_remain = (1-frac_done)*t_elapsed/frac_done
            print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
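
    One possible shape for a rewrite, sketched under assumptions rather than tested (it reuses calculate_dipole, reshape_vector, a, n, r_i, mom_i and gamma_mu from above): build the whole grid of test positions in one broadcast step, then keep a single flat loop over grid points, which preserves progress reporting and incremental writes while still removing two levels of nesting:

        import numpy as np

        # All n[0]*n[1]*n[2] fractional coordinates at once, shape (M, 3).
        fx = np.arange(n[0]) / float(n[0])
        fy = np.arange(n[1]) / float(n[1])
        fz = np.arange(n[2]) / float(n[2])
        frac = np.stack(np.meshgrid(fx, fy, fz, indexing='ij'), axis=-1).reshape(-1, 3)

        # Row m of r_test is r_frac_x*a[0] + r_frac_y*a[1] + r_frac_z*a[2].
        r_test = np.dot(frac, a)

        # One flat loop over grid points instead of three nested loops;
        # chunking this loop bounds memory and allows progress updates.
        for idx, point in enumerate(r_test):
            B = calculate_dipole(reshape_vector(point), r_i, mom_i)
            omega = gamma_mu * np.sqrt(np.dot(B, B))
            # write point, B and omega to a file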

    Read the article

  • Migrate an intermediate CA to a new root

    - by Tim Brigham
    Using the Microsoft CA, is there any way to cut over to a new certificate authority from an intermediate authority? Both of my systems are Microsoft CAs: I have a 2008 R2 Enterprise CA (intermediate) and an old 2003 CA (root). The 2003 box bit the dust and I don't have good backups. I still have a few months before the CRL expires; instead of having to cut over to a new intermediate authority, is there a ready way to simply point this intermediate authority to a new offline root CA?

    Read the article

  • Reaching Intermediate Programming Status

    - by George Stocker
    I am a software engineer who has held positions programming in VBA (though I dare not consider that 'real' experience, as it was trial and error!), Perl w/ CGI, C#, and ASP.NET. The latter two are post-undergraduate, with my entrance into the 'real world'. I'm 2 years out of college, and have had 5 years of experience (total) across the languages I've mentioned. However, when it comes to my resume, I can only put 2 years down for C#, and less than a year down for ASP.NET. I feel like I know C#, but I still have to spend time asking 'What does this method do?', whereas some of the more senior engineers can immediately say, "Oh, Method X does this," without ever having looked at that method before. So I know empirically that there's a gulf there, but I'm not exactly sure how to bridge it. I've started programming in Project Euler, and I've picked up a book on design patterns, but I still feel like I spend each day treading water instead of moving forward. That isn't to say that I don't feel like I've made progress; it just means that as far as I come each day, I still see the mountain top way off in the distance. My question is this: how did you overcome this plateau? How long did it take you? What methods can you suggest to assist me in this? I've read through Code Complete, The Mythical Man-Month, and CLR via C#, 2nd edition -- my question is: what do I do now? Edit: I just found this question on projects for an intermediate-level programmer. I think it adds to the discussion (though it does not supplant my question). As such, I'm adding it to the question as a "For More Information".

    Read the article

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon Load Balancer. However, when I check the SSL with a site test tool, I get the following error:

        Error while checking the SSL Certificate!!
        Unable to get the local issuer of the certificate. The issuer of a
        locally looked up certificate could not be found. Normally this
        indicates that not all intermediate certificates are installed on
        the server.

    I converted the crt file to pem using these commands from this tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon Load Balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain?

    UPDATE: If you make a request to VeriSign they will give you a certificate chain. This chain includes the public crt, the intermediate crt and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding the chain to the certificate chain box of your Amazon Load Balancer. If you are making HTTPS requests from an Android app, the above instructions may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go here, click on the "retail ssl" tab, then click on "secure site" "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate chain box. Just in case you have not found it, here is the direct link. If you are using GeoTrust certificates then the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.

    Read the article

  • Temporary intermediate table

    - by user289429
    In our project to generate massive reports in Oracle, we use some permanent tables to hold intermediate results. For example, to generate one report we run a few queries and populate the table; at the final step we join the intermediate table with huge application tables. These intermediate tables are cleared for the next report run. We have a few concerns about performance. These intermediate tables are transactional and don't have statistics. Is it a good idea to join these with application tables which are partitioned and have up-to-date statistics? We need the results stored in the intermediate tables to be available across requests from the UI, hence we are not in a position to use Oracle-provided temporary tables. Any thoughts on what could be done would be appreciated.

    Read the article
