Search Results

Search found 6738 results on 270 pages for 'grid py'.


  • Record and Play your WebLogic Console Tasks Like a DVR

    - by james.bayer
    Automation using WebLogic Scripting Tool

    Today on the Oracle internal mailing list for WebLogic Server questions someone asked how to automate the configuration of the thread model for WebLogic Server, and they were having trouble with the jython scripting syntax. I’ve previously written about this feature called Work Managers and the associated constraints. However, I did not show how to automate the process of configuring this without the console using WebLogic Scripting Tool – the jython scripting automation environment abbreviated as WLST. I’ve written some very basic introductions to WLST before and there is also an Oracle By Example on the subject, but this is a bit more advanced. Fear not, because there is a really easy-to-use feature of the WLS console that lets you “Record” user actions just like a DVR. Using these recordings of the web-based console, you can easily create a script even if you are unfamiliar with the WLST syntax and API. I’m a big fan of both DVRs and automation, as can be evidenced with this old Halloween picture taken during simpler times. Obviously the Cast Away and The Big Lebowski references show some age. I was a big Tivo fan-boy back in the day and I still think it’s the best DVR.

    I strongly believe that WebLogic Scripting Tool (WLST) is an absolutely essential tool for automating administration tasks in anything beyond a development environment. Even in development environments you can make a case that it makes sense to start the automation for environments downstream. I promise you that once you start using it for any tasks that you do even semi-regularly, you won’t go back to clicking through the console. It’s simply so much more efficient and less error-prone to run a script.

    Let’s say you need to create a Work Manager and MaxThreadsConstraint – the easy way to do it is to configure it in the WLS console first while capturing the commands with a recording. See the images for the simple steps – click to enlarge.

    Record Console Configurations to a File

    Review the Recordings and Make Slight Modifications

    In order to make the recorded .py file directly callable as a stand-alone script I added calls to the connect() and edit() functions at the beginning and calls to disconnect() and exit() at the end – otherwise the main section of the script was provided by the console recording. Below is the resulting file I saved as d:/temp/wm.py

        connect('weblogic','welcome1', 't3://localhost:7001')
        edit()
        startEdit()

        cd('/SelfTuning/wl_server')
        cmo.createMaxThreadsConstraint('MaxThreadsConstraint-0')

        cd('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0')
        set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))
        cmo.setCount(5)
        cmo.unSet('ConnectionPoolName')

        cd('/SelfTuning/wl_server')
        cmo.createWorkManager('WorkManager-0')
        cd('/SelfTuning/wl_server/WorkManagers/WorkManager-0')
        set('Targets',jarray.array([ObjectName('com.bea:Name=examplesServer,Type=Server')], ObjectName))

        cmo.setMaxThreadsConstraint(getMBean('/SelfTuning/wl_server/MaxThreadsConstraints/MaxThreadsConstraint-0'))
        cmo.setIgnoreStuckThreads(false)

        activate()
        disconnect()
        exit()

    Run the Script

    If you want to test it, be sure to delete the Work Manager and MaxThreadsConstraint that you had previously created in the console.
    Do something like the following - set up the environment and tell WLST to execute the script, which happens in the first 2 lines; the rest doesn’t require any user input:

        D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server\bin>setDomainEnv.cmd
        D:\Oracle\wls11g\wlserver_10.3\samples\domains\wl_server>java weblogic.WLST d:\temp\wm.py

        Initializing WebLogic Scripting Tool (WLST) ...

        Welcome to WebLogic Server Administration Scripting Shell

        Type help() for help on available commands

        Connecting to t3://localhost:7001 with userid weblogic ...
        Successfully connected to Admin Server 'examplesServer' that belongs to domain 'wl_server'.

        Warning: An insecure protocol was used to connect to the server.
        To ensure on-the-wire security, the SSL port or Admin port should be used instead.

        Location changed to edit tree. This is a writable tree with DomainMBean as the root.
        To make changes you will need to start an edit session via startEdit().

        For more help, use help(edit)

        Starting an edit session ...
        Started edit session, please be sure to save and activate your changes once you are done.
        Activating all your changes, this may take a while ...
        The edit lock associated with this edit session is released once the activation is completed.
        Activation completed
        Disconnected from weblogic server: examplesServer

        Exiting WebLogic Scripting Tool.

    Now if you go back and look in the console the changes have been made and we now have a complete script. Of course there is a full MBean reference and you can learn the nuances of jython and WLST, but why not let the WLS console do most of the work for you! Happy scripting.
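    If you’d rather not hard-code the credentials in the recorded script, one small variation (just a sketch; the argument handling here is my own addition rather than anything the console recording produces) is to pass the username, password and URL on the command line and read them with jython’s sys module:

        # wm.py - same recorded body as above, but connection details come from the command line
        # invoke as: java weblogic.WLST d:\temp\wm.py weblogic welcome1 t3://localhost:7001
        import sys

        if len(sys.argv) < 4:
            print 'usage: java weblogic.WLST wm.py <user> <password> <admin-url>'
            exit()

        user, pwd, url = sys.argv[1], sys.argv[2], sys.argv[3]
        connect(user, pwd, url)
        edit()
        startEdit()

        # ... recorded configuration commands go here, unchanged ...

        activate()
        disconnect()
        exit()

    That keeps passwords out of the script file itself, which matters once these scripts start living in version control.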

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.

    How I Got 15x Improvement Without Really Trying
    John Feo, Sun Microsystems

    Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques.

    Introduction

    Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible.

    Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran.

    Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA.

    Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes.

    Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile.

    Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties.
    Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

    Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive.

    Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

        #  | cache performance | redundant operations | loop structures | performance improvement
        1    x x    15.5
        2    x      2.8
        3    x x    2.5
        4    x      2.1
        5    x x    2.0
        6    x      5.0
        7    x      5.8
        8    x      6.3
        9           2.2
        10   x x    3.3

    Table 1 — Area of improvement and performance gains of 10 codes

    The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

    Optimizing cache performance

    Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
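    A quick way to see the cache-order effect for yourself is a short Python/NumPy sketch (purely illustrative, and not taken from any of the ten codes studied). NumPy arrays are row-major by default, so summing along rows touches memory contiguously while summing down columns does not.

        import time
        import numpy as np

        a = np.random.rand(4000, 4000)   # row-major (C-order) array

        # sum each row: the inner traversal walks contiguous memory
        t0 = time.perf_counter()
        row_sums = [a[i, :].sum() for i in range(a.shape[0])]
        t_rows = time.perf_counter() - t0

        # sum each column: the inner traversal strides across rows
        t0 = time.perf_counter()
        col_sums = [a[:, j].sum() for j in range(a.shape[1])]
        t_cols = time.perf_counter() - t0

        print("row-order sweep: %.3f s, column-order sweep: %.3f s" % (t_rows, t_cols))

    On most machines the column-order sweep is noticeably slower, for exactly the stride reasons described in this section.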
    6 out of the 10 codes studied here benefited from such high level optimizations.

    Array Accesses

    The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing

        do I = 0, 1010, delta_x
          IM = I - delta_x
          IP = I + delta_x
          do J = 5, 995, delta_x
            JM = J - delta_x
            JP = J + delta_x
            T1 = CA1(IP, J) + CA1(I, JP)
            T2 = CA1(IM, J) + CA1(I, JM)
            S1 = T1 + T2 - 4 * CA1(I, J)
            CA(I, J) = CA1(I, J) + D * S1
          end do
        end do

    In code 2, the culprit is conditionals

        do I = 1, N
          do J = 1, N
            If (IFLAG(I,J) .EQ. 0) then
              T1 = Value(I, J-1)
              T2 = Value(I-1, J)
              T3 = Value(I, J)
              T4 = Value(I+1, J)
              T5 = Value(I, J+1)
              Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
              Delta = ABS(T3 - Value(I,J))
              If (Delta .GT. MaxDelta) MaxDelta = Delta
            endif
          enddo
        enddo

    I fixed both programs by inverting the loops by hand.

    Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

    Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

        L1: for i      L2: for i      L3: for i
              for l          for l          for j
                for k          for j          for k
                  for j          for k          for l

    So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

    Array Strides

    When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
    Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

        do j = 1, GZ
          do i = 1, GZ
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
          enddo
        enddo

    where CA and CA1 are compressed arrays of size GZ.

    Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.

    Data reuse

    In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

    In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4,

        do J = 1, GZ-2, 2
          do I = 1, GZ-2, 2
            T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
            T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
            T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
            T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
            T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
            T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
            T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
            S1 = T1 + T4 - 4 * CA1(i+0, j+0)
            S2 = T2 + T5 - 4 * CA1(i+1, j+0)
            S3 = T3 + T6 - 4 * CA1(i+0, j+1)
            S4 = T4 + T7 - 4 * CA1(i+1, j+1)
            CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
            CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
            CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
            CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
          enddo
        enddo

    The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.
    In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before

        for (k = 0; k < NK[u]; k++) {
          sum = 0.0;
          for (y = 0; y < NY; y++) {
            sum += W[y][u][k] * delta[y];
          }
          backprop[i++] = sum;
        }

    and after code

        for (k = 0; k < KK - 8; k += 8) {
          sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
          sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
          for (y = 0; y < NY; y++) {
            sum0 += W[y][0][k+0] * delta[y];
            sum1 += W[y][0][k+1] * delta[y];
            sum2 += W[y][0][k+2] * delta[y];
            sum3 += W[y][0][k+3] * delta[y];
            sum4 += W[y][0][k+4] * delta[y];
            sum5 += W[y][0][k+5] * delta[y];
            sum6 += W[y][0][k+6] * delta[y];
            sum7 += W[y][0][k+7] * delta[y];
          }
          backprop[k+0] = sum0;
          backprop[k+1] = sum1;
          backprop[k+2] = sum2;
          backprop[k+3] = sum3;
          backprop[k+4] = sum4;
          backprop[k+5] = sum5;
          backprop[k+6] = sum6;
          backprop[k+7] = sum7;
        }

    for one of the loops unrolled 8 times.

    Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

    Reducing instruction count

    Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

    The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

    Memory operations

    The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

    Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3

        for (y = 0; y < NY; y++) {
          i = 0;
          for (u = 0; u < NU; u++) {
            for (k = 0; k < NK[u]; k++) {
              dW[y][u][k] += delta[y] * I1[i++];
            }
          }
        }

    Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
    In reality, dW and delta do not overlap in memory, so I rewrote the loop as

        for (y = 0; y < NY; y++) {
          i = 0;
          Dy = delta[y];
          for (u = 0; u < NU; u++) {
            for (k = 0; k < NK[u]; k++) {
              dW[y][u][k] += Dy * I1[i++];
            }
          }
        }

    Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays

        #define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

    The macro is too complex for the compiler to understand and so, it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define a0 = MAT4D(a,q,0,j,k) before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i].

    A similar problem appears in code 6, a Fortran program. The key loop in this program is

        do n1 = 1, nh
          nx1 = (n1 - 1) / nz + 1
          nz1 = n1 - nz * (nx1 - 1)
          do n2 = 1, nh
            nx2 = (n2 - 1) / nz + 1
            nz2 = n2 - nz * (nx2 - 1)
            ndx = nx2 - nx1
            ndy = nz2 - nz1
            gxx = grn(1,ndx,ndy)
            gyy = grn(2,ndx,ndy)
            gxy = grn(3,ndx,ndy)
            balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
            balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
          end do
        end do

    The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

    Data operations

    Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 ≤ i < N, 0 ≤ j < M, for most values of i and j. Since the program computes most of the NM possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

        for (i = 0; i < N; i += 8) {
          for (j = 0; j < M; j++) {
            sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
            sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
            for (k = 0; k < K; k++) {
              sum0 += A[i+0][k] * B[j][k];
              sum1 += A[i+1][k] * B[j][k];
              sum2 += A[i+2][k] * B[j][k];
              sum3 += A[i+3][k] * B[j][k];
              sum4 += A[i+4][k] * B[j][k];
              sum5 += A[i+5][k] * B[j][k];
              sum6 += A[i+6][k] * B[j][k];
              sum7 += A[i+7][k] * B[j][k];
            }
            C[i+0][j] = sum0;
            C[i+1][j] = sum1;
            C[i+2][j] = sum2;
            C[i+3][j] = sum3;
            C[i+4][j] = sum4;
            C[i+5][j] = sum5;
            C[i+6][j] = sum6;
            C[i+7][j] = sum7;
          }
        }

    This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

    In code 5, we have the data version of the index optimization in code 6.
    Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

    The main loop in Code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index

        for (j = 0; j < N; j++) {
          for (i = 0; i < M; i++) {
            r = i * hrmax;
            R = A[j];
            temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
            high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
            low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
            kap = (R > PRM[6]) ?
                  high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) :
                  low * pow(R/PRM[6], PRM[5]);
            < rest of loop omitted >
          }
        }

    Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array.

        for (i = 0; i < M; i++) {
          r = i * hrmax;
          TEMP[i] = pow(r, PRM[3]);
        }

    [N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.]

    We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

        for (j = 0; j < N; j++) {
          R = rig[j] / 1000.;
          tmp1 = kcoeff * par[2] * beta[j] * par[4];
          tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
          tmp3 = 1.0 + (par[4] * par[4] * R * R);
          tmp4 = par[6] * par[6] / tmp2;
          tmp5 = R * R / tmp3;
          tmp6 = pow(R / par[6], par[5]);

          if ((par[3] == 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++)
              KAP[i] = tmp1 * tmp5;
          }
          else if ((par[3] == 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++)
              KAP[i] = tmp1 * tmp4 * tmp6;
          }
          else if ((par[3] != 0.0) && (R > par[6])) {
            for (i = 1; i <= imax1; i++)
              KAP[i] = tmp1 * TEMP[i] * tmp5;
          }
          else if ((par[3] != 0.0) && (R <= par[6])) {
            for (i = 1; i <= imax1; i++)
              KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
          }

          for (i = 0; i < M; i++) {
            kap = KAP[i];
            r = i * hrmax;
            < rest of loop omitted >
          }
        }

    Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

    Copy operations

    Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages. Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
    Then store the problem’s initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

    The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

    Optimizing loop structures

    Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate or isolate conditionals to their own loops as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet

        MaxDelta = 0.0
        do J = 1, N
          do I = 1, M
            < code omitted >
            Delta = abs(OldValue - NewValue)
            if (Delta > MaxDelta) MaxDelta = Delta
          enddo
        enddo
        if (MaxDelta .gt. 0.001) goto 200

    Since the only use of MaxDelta is to control the jump to 200 and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

        MaxDelta = .false.
        do J = 1, N
          do I = 1, M
            < code omitted >
            Delta = abs(OldValue - NewValue)
            MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
          enddo
        enddo
        if (MaxDelta) goto 200

    thereby eliminating the conditional expression from the inner loop.

    A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops.

    As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops eliminating the need to write and read temporary arrays.
    I found such an occasion in code 10 where I split the loop

        do i = 1, n
          do j = 1, m
            A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
            B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
            A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
            B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
            C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
            D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
            C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
            D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
          end do
        end do

    into two disjoint loops

        do i = 1, n
          do j = 1, m
            A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
            B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
            A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
            B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
          end do
        end do

        do i = 1, n
          do j = 1, m
            C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
            D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
            C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
            D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
          end do
        end do

    Conclusions

    Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel despite the availability of parallel systems to all developers.

    Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.

    I recommend that graduate students be required to take a semester length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
    These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

    About the Author

    John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company where he was project manager for the MTA, and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.)

    The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't know easily that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that to the human eye is obvious. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out what particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out.

    The only thing I was thinking of was doing some sort of N² distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution?

    Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N² implementation first. So, take every particle, calculate distance to all other particles, and the threshold for being in a cluster or not was whether there were N particles within d distance (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order N search and incrementing a counter for the particles within d, and that sped stuff up by a factor of 6.

    Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ~7d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, then each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N*log(N) as opposed to N² speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
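    For reference, here is a rough Python sketch of the grid-bucket idea. It is illustrative only: the names cell_size, min_neighbors and max_dist are placeholders I chose rather than values from the actual simulation code, it bins into a single grid with a 3x3 cell neighborhood instead of the four offset grids, and the real version is threaded and works on the simulation's own data structures.

        import math
        from collections import defaultdict

        def find_clustered_indices(positions, max_dist, min_neighbors, cell_size):
            """Return indices of particles with at least min_neighbors others within max_dist.

            positions: list of (x, y) tuples.
            cell_size should be >= max_dist so the 3x3 neighborhood covers the search radius.
            """
            buckets = defaultdict(list)
            for idx, (x, y) in enumerate(positions):
                buckets[(int(x // cell_size), int(y // cell_size))].append(idx)

            clustered = []
            for idx, (x, y) in enumerate(positions):
                cx, cy = int(x // cell_size), int(y // cell_size)
                count = 0
                # only look at the particle's own cell and its 8 neighboring cells
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for j in buckets[(cx + dx, cy + dy)]:
                            if j == idx:
                                continue
                            ox, oy = positions[j]
                            if math.hypot(x - ox, y - oy) <= max_dist:
                                count += 1
                if count >= min_neighbors:
                    clustered.append(idx)
            return clustered

    Summing the masses of the returned indices then gives the aggregate clump mass while ignoring the stray particles between strands.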

    Read the article

  • How do I use setFilmSize in panda3d to achieve the correct view?

    - by lhk
    I'm working with Panda3d and recently switched my game to isometric rendering. I moved the virtual camera accordingly and set an orthographic lens. Then I implemented the classes "Map" and "Canvas". A canvas is a dynamically generated mesh: a flat quad. I'm using it to render the ingame graphics. Since the game itself is still set in a 3d coordinate system I'm planning to rely on these canvases to draw sprites. I could have named this class "Tile" but as I'd like to use it for non-tile sketches (enemies, environment) as well I thought canvas would describe its function better. Map does exactly what its name suggests. Its constructor receives the number of rows and columns and then creates a standard isometric map. It uses the canvas class for tiles. I'm planning to write a map importer that reads a file to create maps on the fly.

    Here's the canvas implementation:

        class Canvas:
            def __init__(self, texture, vertical=False, width=1, height=1):
                # create the mesh
                format = GeomVertexFormat.getV3t2()
                format = GeomVertexFormat.registerFormat(format)
                vdata = GeomVertexData("node-vertices", format, Geom.UHStatic)
                vertex = GeomVertexWriter(vdata, 'vertex')
                texcoord = GeomVertexWriter(vdata, 'texcoord')
                # add the vertices for a flat quad
                vertex.addData3f(1, 0, 0)
                texcoord.addData2f(1, 0)
                vertex.addData3f(1, 1, 0)
                texcoord.addData2f(1, 1)
                vertex.addData3f(0, 1, 0)
                texcoord.addData2f(0, 1)
                vertex.addData3f(0, 0, 0)
                texcoord.addData2f(0, 0)
                prim = GeomTriangles(Geom.UHStatic)
                prim.addVertices(0, 1, 2)
                prim.addVertices(2, 3, 0)
                self.geom = Geom(vdata)
                self.geom.addPrimitive(prim)
                self.node = GeomNode('node')
                self.node.addGeom(self.geom)
                # this is the handle for the canvas
                self.nodePath = NodePath(self.node)
                self.nodePath.setSx(width)
                self.nodePath.setSy(height)
                if vertical:
                    self.nodePath.setP(90)
                # the most important part: "Drawing" the image
                self.texture = loader.loadTexture("" + texture + ".png")
                self.nodePath.setTexture(self.texture)

    Now the code for the Map class

        class Map:
            def __init__(self, rows, columns, size):
                self.grid = []
                for i in range(rows):
                    self.grid.append([])
                    for j in range(columns):
                        # create a canvas for the tile. For testing the texture is preset
                        tile = Canvas(texture="../assets/textures/flat_concrete", width=size, height=size)
                        x = (i - 1) * size
                        y = (j - 1) * size
                        # set the tile up for rendering
                        tile.nodePath.reparentTo(render)
                        tile.nodePath.setX(x)
                        tile.nodePath.setY(y)
                        # and store it for later access
                        self.grid[i].append(tile)

    And finally the usage

        def loadMap(self):
            self.map = Map(10, 10, 1)

    This function is called within the constructor of the World class. The instantiation of World is the entry point to the execution. The code is pretty straightforward and runs well. Sadly the output is not as expected. Please note: the problem is not the white rectangle, it's my player object. The problem is that although the map should have equal width and height it's stretched weirdly. With orthographic rendering I expected the map to be a perfect square. What did I do wrong?

    UPDATE: I've changed the viewport. This is how I set up the orthographic camera:

        lens = OrthographicLens()
        lens.setFilmSize(40, 20)
        base.cam.node().setLens(lens)

    You can change the "aspect" by modifying the parameters of setFilmSize. I don't know exactly how they are related to window size and screen resolution, but after testing a little the values above seem to work for me. Now everything is rendered correctly as long as I don't resize the window. Every change of the window's size, as well as switching to fullscreen, destroys the correct rendering.
    I know that implementing a listener for resize events is not in the scope of this question. However, I wonder why I need to make the film's height two times bigger than its width. My window is quadratic! Can you tell me how to find out the correct setting for the film size?

    UPDATE 2: I can imagine that it's hard to envision the behaviour of the game. At first glance the obvious solution is to pass the window's width and height in pixels to setFilmSize. There are two problems with that approach. First, the parameters for setFilmSize are ingame units, so you'll get a view that is way too big if you pass the pixel size. Second, for some strange reason the image is distorted if you pass equal values for width and height. Here's the output for setFilmSize(800, 800). You'll have to stress your eyes but you'll see what I mean.
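    One thing I'm experimenting with (an untested sketch; I'm assuming base.getAspectRatio() and the "window-event" message behave the way the Panda3D manual describes) is deriving the film size from the window's aspect ratio instead of hard-coding it, and re-applying it whenever the window changes:

        from panda3d.core import OrthographicLens

        VIEW_HEIGHT = 20.0  # how many ingame units the view should span vertically (my own choice)

        def apply_ortho_lens(win=None):
            # width / height of the current window, as reported by ShowBase
            aspect = base.getAspectRatio()
            lens = OrthographicLens()
            lens.setFilmSize(VIEW_HEIGHT * aspect, VIEW_HEIGHT)
            base.cam.node().setLens(lens)

        apply_ortho_lens()
        # re-fit the lens on resize and fullscreen switches
        base.accept("window-event", apply_ortho_lens)

    I don't know yet whether this also explains why setFilmSize(40, 20) looked right in a square window, but at least the lens would follow the window instead of staying fixed.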

    Read the article

  • Memory is full with vertex buffer

    - by Christian Frantz
    I'm having a pretty strange problem that I didn't think I'd run into. I was able to store a 50x50 grid in one vertex buffer finally, in hopes of better performance. Before, I had each cube have an individual vertex buffer, and with 4 50x50 grids this slowed down my game tremendously. But it still ran. With 4 50x50 grids with my new code, that's only 4 vertex buffers. With the 4 vertex buffers, I get a memory error. When I load the game with 1 grid, it takes forever to load, and with my previous version it started up right away. So I don't know if I'm storing chunks wrong or what, but it stumped me -.-

        for (int x = 0; x < 50; x++)
        {
            for (int z = 0; z < 50; z++)
            {
                for (int y = 0; y <= map[x, z]; y++)
                {
                    SetUpVertices();
                    SetUpIndices();
                    cubes.Add(new Cube(device, new Vector3(x, map[x, z] - y, z), grass));
                }
            }
        }

        vertexBuffer = new VertexBuffer(device, typeof(VertexPositionTexture), vertices.Count(), BufferUsage.WriteOnly);
        vertexBuffer.SetData<VertexPositionTexture>(vertices.ToArray());
        indexBuffer = new IndexBuffer(device, typeof(short), indices.Count(), BufferUsage.WriteOnly);
        indexBuffer.SetData(indices.ToArray());

    That's how they're stored. The array I'm reading from is a byte array which defines the coordinates of my map. Now with my old version I used the same loading from an array, so that hasn't changed. The only difference is the one vertex buffer instead of 2500 for a 50x50 grid. cubes is just a normal list that holds all my cubes for the vertex buffer.

    Another thing that just came to mind would be my draw calls. If I'm setting an effect for each cube in my cube list, that's probably going to take a lot of memory. How can I avoid doing this? I need the foreach method to set my cubes to the right position

        foreach (Cube block in cube.cubes)
        {
            effect.VertexColorEnabled = false;
            effect.TextureEnabled = true;
            Matrix center = Matrix.CreateTranslation(new Vector3(-0.5f, -0.5f, -0.5f));
            Matrix scale = Matrix.CreateScale(1f);
            Matrix translate = Matrix.CreateTranslation(block.cubePosition);
            effect.World = center * scale * translate;
            effect.View = cam.view;
            effect.Projection = cam.proj;
            effect.FogEnabled = false;
            effect.FogColor = Color.CornflowerBlue.ToVector3();
            effect.FogStart = 1.0f;
            effect.FogEnd = 50.0f;
            cube.Draw(effect);
            noc++;
        }

    Read the article

  • Telerik RadGridView problem

    - by Polaris
    I am using Telerik RadGridView in my project. I want to show an image in a column.

        GridViewImageColumn col1 = new GridViewImageColumn();
        col1.Width = 100;
        col1.DataMemberBinding = new Binding("id");
        col1.Header = "PhotoByConverter";
        col1.DataMemberBinding.Converter = new ThumbnailConverter();
        grid.Columns.Add(col1);

        GridViewImageColumn col2 = new GridViewImageColumn();
        col2.Width = 100;
        col2.DataMemberBinding = new Binding("firstName");
        col2.Header = "Person name";
        col2.DataMemberBinding.Converter = new ThumbnailConverter();
        grid.Columns.Add(col2);

        Grid.ItemsSource = DataTable;

    The first column does not work but the second works fine. I use the converter shown below for the image

        public class ThumbnailConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                IEnumerable<thumbNail> result = from n in thumbnails
                                                where n.personID == value.ToString()
                                                select n;
                if (result != null && result.First().thumbnail != null)
                {
                    return result.First().thumbnail.file;
                }
                else
                {
                    return null;
                }
            }

            public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                throw new Exception("The method or operation is not implemented.");
            }
        }

    I find the thumbnail of the person by id and set it as the data for the GridViewImageColumn. I checked with the debugger and the converter works properly. I can't understand why it doesn't work. Any ideas?

    Read the article

  • CSS window height problem with dynamically loaded CSS

    - by Michael Mao
    Hi all:

    Please go here and use username "admin" and password "endlesscomic" (without wrapper quotes) to see the web app demo. Basically what I am trying to do is to incrementally integrate my work into this web app, say, every night, for the client to check the progress. Also, he would like to see, at the very beginning, a mockup of the page layout. I am trying to use the 960 grid system to achieve this. So far, so good. Except one issue: when the "mockup.css" is loaded dynamically by jQuery, it "extends" the window to the bottom, something I do not wanna have... As an inexperienced web developer, I don't know which part is wrong. Below is my js:

        /* master.js */
        $(document).ready(function() {
            $('#addDebugCss').click(function() {
                alertMessage('adding debug css...');
                addCssToHead('./css/debug.css');
                $('.grid-insider').css('opacity', '0.5'); // reset mockup background transparency
            });
            $('#addMockupCss').click(function() {
                alertMessage('adding mockup css...');
                addCssToHead('./css/mockup.css');
                $('.grid-insider').css('opacity', '1'); // set semi-background transparency for mockup
            });
            $('#resetCss').click(function() {
                alertMessage('rolling back to normal');
                rollbackCss(new Array("./css/mockup.css", "./css/debug.css"));
            });
        });

        function alertMessage(msg) // TODO find a better modal prompt
        {
            alert(msg);
        }

        function addCssToHead(path_to_css)
        {
            $('<link rel="stylesheet" type="text/css" href="' + path_to_css + '" />').appendTo("head");
        }

        function rollbackCss(set)
        {
            for (var i in set) {
                $('link[href="' + set[i] + '"]').remove();
            }
        }

    Should something be added to the external mockup.css? Or is there something to change in my master.js? Thanks for any hints/suggestions in advance.

    Read the article

  • ExtJS: Combobox in EditorGridPanel not selecting the desired item (with test case)

    - by TomH
    I'm using ExtJS to create an EditorGridPanel with a combobox for an editor in a cell. The combobox in my EditorGridPanel is not working as I'd expect it to. When the user types the first letter of an item in the drop down list, the combobox seems to ignore it and select the first item in the list. I can reproduce the error consistently and have put together a test case here: http://cluebucket.com/dev/testcase/testcase.html

    Load the page and reproduce the behavior by the following -- note that this is all done using the keyboard, no mouse clicks:

    1. Click 'Add Record' (a new row is added to the grid) and enter text in the text field.
    2. TAB to the Priority field without selecting anything (None will remain selected), then TAB out of the Priority field. (A new row is added to the grid.)
    3. Enter text and TAB to the Priority field, type v (Very High is selected), then TAB out of the priority field. (A new row is added to the grid.)
    4. Enter text and TAB to the Priority field, type v (None is selected, but Very High should have been), then TAB out of the priority field.
    5. Enter text and TAB to the priority field, type l ('el') (Low is selected).
    6. TAB out, enter text, TAB to priority, type l (None is selected).

    It appears that whenever the user attempts to select the same value that was selected in the previous row, the combobox selects None. Any ideas? The code is available at cluebucket.com/dev/testcase/js/testcase.js

    Thoughts/Pointers/Corrections are appreciated!! thanks tom

    Read the article

  • Jqgrid: navigation based on the selected row

    - by bsreekanth
    Hello, I was trying to enable navigation based on a selected row. So, the user selects a row from jQgrid, and when they press show (is there a show button for the grid? I saw edit, add etc.), it needs to go to a new page based on the url (part of the row).

        $(document).ready(function () {
            function getLink() {
                // var rowid = $("#customer_list").jqGrid('getGridParam', 'selrow');
                var rowid = $("#customer_list").getGridParam('selrow');
                var MyCellData = $("#customer_list").jqGrid('getCell', rowid, 'dataUrl');
                return MyCellData;
            }

            $("#customer_list").jqGrid({
                url: 'mytestList',
                editurl: 'jq_edit_test',
                datatype: "json",
                colNames: ['Call Id', 'Title', 'dataUrl'],
                colModel: [
                    {name: 'callId', width: 80, search: false},
                    {name: 'title', width: 200, sortable: false},
                    {name: 'dataUrl', hidden: true}
                ],
                rowNum: 10,
                sortname: 'lastUpdated',
                sortorder: 'desc',
                pager: '#customer_list_pager',
                viewrecords: true,
                gridview: true
            }).navGrid('#customer_list_pager',
                {add: true, edit: true, del: false, search: true, refresh: true},
                {closeAfterEdit: true, afterSubmit: afterSubmitEvent}, // edit options
                {addCaption: 'Create New something', afterSubmit: afterSubmitEvent, savekey: [true, 13]}, // add options
                {afterSubmit: afterSubmitEvent} // delete options
            );

            $("#customer_list").jqGrid('filterToolbar');
        });

    So, the url is passed for each row as dataUrl. I'm trying to read it and set it on the button. When debugging through Firebug, the rowid was 223 (there were only 12 rows in the grid), and the cell value is empty. Currently the button is kept outside the grid, but it may be better for it to be part of the navGrid. Thanks.

    Read the article

  • How to do server-side validation using Jqgrid?

    - by Eoghan
    Hi, I'm using jqgrid to display a list of sites and I want to do some server side validation when a site is added or edited. (Form editing rather than inline. Validation needs to be server side for various reasons I won't go into.) I thought the best way would be to check the data via an ajax request when the beforeSubmit event is triggered. However this only seems to work when I'm editing an existing row in the grid - the function isn't called when I add a new row. Have I got my beforeSubmit in the wrong place? Thanks for your help. $("#sites-grid").jqGrid({ url:'/json/sites', datatype: "json", mtype: 'GET', colNames:['Code', 'Name', 'Area', 'Cluster', 'Date Live', 'Status', 'Lat', 'Lng'], colModel :[ {name:'code', index:'code', width:80, align:'left', editable:true}, {name:'name', index:'name', width:250, align:'left', editrules:{required:true}, editable:true}, {name:'area', index:'area', width:60, align:'left', editable:true}, {name:'cluster_id', index:'cluster_id', width:80, align:'right', editrules:{required:true, integer:true}, editable:true, edittype:"select", editoptions:{value:"<?php echo $cluster_options; ?>"}}, {name:'estimated_live_date', index:'estimated_live_date', width:120, align:'left', editable:true, editrules:{required:true}, edittype:"select", editoptions:{value:"<?php echo $this->month_options; ?>"}}, {name:'status', index:'status', width:80, align:'left', editable:true, edittype:"select", editoptions:{value:"Live:Live;Plan:Plan;"}}, {name:'lat', index:'lat', width:140, align:'right', editrules:{required:true}, editable:true}, {name:'lng', index:'lng', width:140, align:'right', editrules:{required:true}, editable:true}, ], height: '300', pager: '#pager-sites', rowNum:30, rowList:[10,30,90], sortname: 'cluster_id', sortorder: 'desc', viewrecords: true, multiselect: false, caption: 'Sites', editurl: '/json/sites' }); $("#sites-grid").jqGrid('navGrid','#pager-sites',{edit:true,add:true,del:true, beforeSubmit : function(postdata, formid) { $.ajax({ url : 'json/validate-site/', data : postdata, dataType : 'json', type : 'post', success : function(data) { alert(data.message); return[data.result, data.message]; } }); }});

    Read the article

  • WPF Ignoring ProgressBar Style

    - by Nidonocu
    Following a problem in WPF 4 where the default ProgressBar template has an off-white highlight applied causing it to dull the foreground colour, I decided to extract the template and manually edit the values to white. After doing this, I put the template as a style in App.xaml for global use and the Visual Studio designer began to use it right away. However, when I run my app, the default style is still being applied: I then tried setting the style specifically using a key, then even having it as an inline style and finally as a inline template without a wrapping style for the specific progress bar and it is still ignored. What's going wrong? Current Code for progress bar: <ProgressBar Grid.Column="1" Grid.Row="2" Height="15" Width="355" Margin="0,0,0,7" IsIndeterminate="{Binding ProgressState, FallbackValue=False}" Visibility="{Binding ProgressVisibility, FallbackValue=Collapsed}" Value="{Binding ProgressValue, Mode=OneWay, FallbackValue=100}" Foreground="{Binding ProgressColor,FallbackValue=Red}"> <ProgressBar.Template> <ControlTemplate> <Grid Name="TemplateRoot" SnapsToDevicePixels="True"> <Rectangle RadiusX="2" RadiusY="2" Fill="{TemplateBinding Panel.Background}" /> ...

    Read the article

  • Android - Images from Assets folder in a GridView

    - by Saran
    Hi, I have been working on creating a GridView of images, with the images stored in the Assets folder. http://stackoverflow.com/questions/1933015/opening-an-image-file-inside-the-assets-folder helped me with using a bitmap to read them. The code I currently have is: public View getView(final int position, View convertView, ViewGroup parent) { try { AssetManager am = mContext.getAssets(); String list[] = am.list(""); int count_files = imagelist.length; for(int i= 0;i<=count_files; i++) { BufferedInputStream buf = new BufferedInputStream(am.open(list[i])); Bitmap bitmap = BitmapFactory.decodeStream(buf); imageView.setImageBitmap(bitmap); buf.close(); } } catch (IOException e) { e.printStackTrace(); } } My application does read the images from the Assets folder, but it is not iterating through the cells of the grid view: every cell ends up with the same image from the set. Can anyone tell me how to iterate through the cells and still get different images? I have the above code in an ImageAdapter class which extends BaseAdapter, and in my main class I link it to my GridView with: GridView gv =(GridView)findViewById(R.id.gridview); gv.setAdapter(new ImageAdapter(this, assetlist)); Thanks a lot for any help in advance, Saran
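    A sketch of one way to fix the iteration (untested; assetlist is assumed to be the array of asset file names handed to the ImageAdapter): use the cell's position to load exactly one asset per getView call instead of looping over every file and overwriting the same ImageView:

    public View getView(final int position, View convertView, ViewGroup parent) {
        ImageView imageView;
        if (convertView == null) {
            imageView = new ImageView(mContext);
            imageView.setLayoutParams(new GridView.LayoutParams(85, 85));
            imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
        } else {
            imageView = (ImageView) convertView;
        }
        try {
            AssetManager am = mContext.getAssets();
            // one file per cell: the adapter position indexes into the asset list
            BufferedInputStream buf = new BufferedInputStream(am.open(assetlist[position]));
            imageView.setImageBitmap(BitmapFactory.decodeStream(buf));
            buf.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return imageView;
    }

    getCount() in the adapter would also need to return assetlist.length so the grid creates one cell per image.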

    Read the article

  • NHibernate, Databinding to DataGridView, Lazy Loading, and Session management - need advice

    - by Tom Bushell
    My main application form (WinForms) has a DataGridView, that uses DataBinding and Fluent NHibernate to display data from a SQLite database. This form is open for the entire time the application is running. For performance reasons, I set the convention DefaultLazy.Always() for all DB access. So far, the only way I've found to make this work is to keep a Session (let's call it MainSession) open all the time for the main form, so NHibernate can lazy load new data as the user navigates with the grid. Another part of the application can run in the background, and Save to the DB. Currently, (after considerable struggle), my approach is to call MainSession.Disconnect(), create a disposable Session for each Save, and MainSession.Reconnect() after finishing the Save. Otherwise SQLite will throw "The database file is locked" exceptions. This seems to be working well so far, but past experience has made me nervous about keeping a session open for a long time (I ran into performance problems when I tried to use a single session for both Saves and Loads - the cache filled up, and bogged down everything - see http://stackoverflow.com/questions/2526675/commit-is-very-slow-in-my-nhibernate-sqlite-project). So, my question - is this a good approach, or am I looking at problems down the road? If it's a bad approach, what are the alternatives? I've considered opening and closing my main session whenever the user navigates with the grid, but it's not obvious to me how I would do that - hook every event from the grid that could possibly cause a lazy load? I have the nagging feeling that trying to manage my own sessions this way is fundamentally the wrong approach, but it's not obvious what the right one is.
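    For what it's worth, the disconnect / short-lived-session / reconnect pattern described above stays quite small in code; a rough sketch (hypothetical names, untested):

    mainSession.Disconnect();
    try
    {
        using (var saveSession = sessionFactory.OpenSession())
        using (var tx = saveSession.BeginTransaction())
        {
            saveSession.SaveOrUpdate(entity);
            tx.Commit();
        }
    }
    finally
    {
        mainSession.Reconnect();   // the long-lived session resumes lazy loading for the grid
    }

    Calling mainSession.Clear() now and then (for example when the user moves to a different area of the data) is one way to keep the long-lived session's first-level cache from growing without bound.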

    Read the article

  • WPF Constrain the resize of a canvas' child object to the dimensions of the canvas

    - by Scott
    Given the following XAML: <Window x:Class="AdornerTesting.Window1" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="Window1" Height="500" Width="500" Loaded="Window_Loaded"> <Grid Name="grid"> <Canvas Name="canvas" Width="400" Height="400" Background="LightGoldenrodYellow"> <RichTextBox Name="richTextBox" Canvas.Top="10" Canvas.Left="10" BorderBrush="Black" BorderThickness="2" Width="200" Height="200" MaxWidth="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Canvas}},Path=ActualWidth}" MaxHeight="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type Canvas}},Path=ActualHeight}"/> </Canvas> </Grid> </Window> and a set of adorners being added to the RichTextBox in the Loaded event like so: AdornerLayer adornerLayer = AdornerLayer.GetAdornerLayer(richTextBox); adornerLayer.Add(new ResizeAdorner(richTextBox)); How do I keep from being able to resize the RichTextBox beyond the visible bounds of the Canvas? The ResizeAdorner is essentially the same code that can be found in the MSDN adorner example and it works just fine. Should I be doing something with the bindings of MaxWidth and MaxHeight in the code-behind to calculate how the RichTextBox can be resized? Or is there a way to do this in XAML?
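    One way to do it in code (an untested sketch; member and handler names are hypothetical) is to clamp the new size inside the adorner's Thumb DragDelta handlers against the canvas dimensions, instead of relying on the MaxWidth/MaxHeight bindings alone:

    void HandleBottomRight(object sender, DragDeltaEventArgs args)
    {
        var element = (FrameworkElement)AdornedElement;
        var canvas = (Canvas)element.Parent;
        double left = Canvas.GetLeft(element);
        double top = Canvas.GetTop(element);

        // never let left + width (or top + height) exceed the canvas bounds
        double newWidth = Math.Min(element.Width + args.HorizontalChange,
                                   canvas.ActualWidth - left);
        double newHeight = Math.Min(element.Height + args.VerticalChange,
                                    canvas.ActualHeight - top);

        element.Width = Math.Max(element.MinWidth, newWidth);
        element.Height = Math.Max(element.MinHeight, newHeight);
    }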

    Read the article

  • ExtJS: Autoscroll vertical FormPanels added to panel

    - by Sarge
    Hi All, I'm writing an app where I have a BorderLayout for the entire page. In the south part I have a Panel to which I add FormPanels. I would like to be able to scroll that Panel so I can scroll through the FormPanels. So far, nothing I've found from searches has helped. I don't quite understand what ExtJS requires in terms of the combination of LayoutManagers, setting the size and setting AutoScroll. Any partial tips will be gratefully followed up. A code snippet: createTrailJunctionPanel = function(trailJunction) { var trailJunctionPanel = new Ext.form.FormPanel({ labelWidth: 75, width: 350, defaultType: 'textfield', items: [{ fieldLabel: 'Junction Name', name: 'junction-name' }], autoScroll:true, //anchor: '100% 100%', height: 100 }); matchedTrailJunctionsPanel.add(trailJunctionPanel); if(trailJunction.publicTrailSegments.length == 0) { matchedTrailJunctionsPanel.add(new Ext.form.Label({text: 'No public trails matched'})); } else { var grid = new Ext.grid.GridPanel({ store: mapMatchObjectStore, columns: [ {id:'publicTrailSegment',header: 'Trail', width: 160, sortable: true, dataIndex: 'publicTrailSegment'} ], stripeRows: true, autoExpandColumn: 'publicTrailSegment', height: 350, width: 600, title: 'Matched Trail Junctions' }); matchedTrailJunctionsPanel.add(grid); } matchedTrailJunctionsPanel.doLayout(); } matchedTrailJunctionsPanel = new Ext.Panel({ title: "Matched Trail Junctions2", id: "matched-trail-junctions", autoScroll:true //layout: 'anchor' });
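    A hedged starting point (ExtJS 3-style config, untested): the south region needs a concrete height for autoScroll to have anything to overflow against, and the inner panel should keep the default container layout rather than a fit-style layout that stretches a single child:

    matchedTrailJunctionsPanel = new Ext.Panel({
        title: 'Matched Trail Junctions2',
        id: 'matched-trail-junctions',
        region: 'south',     // assumed placement inside the BorderLayout
        height: 250,         // a fixed height gives the body something to overflow
        autoScroll: true     // scrollbars appear once the FormPanels exceed that height
    });

    With that in place, calling doLayout() after add(), as the code above already does, should be enough for the scrollbar to appear.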

    Read the article

  • PyQt QTreeWidget setItemWidget disappears after drag/drop

    - by mleep
    I'm trying to keep a widget that was put into a QTreeWidgetItem with QTreeWidget.setItemWidget() after a reparent (drag and drop). But if you run the following code, the widget inside the QTreeWidgetItem disappears after the item is moved. Any idea why? What code would fix this (repopulate the QTreeWidgetItem with the widget, I guess)? from PyQt4.QtCore import * from PyQt4.QtGui import * class InlineEditor (QWidget): _MUTE = 'MUTE' def __init__ (self, parent): QWidget.__init__ (self, parent) self.setAutoFillBackground (True) lo = QHBoxLayout() lo.setSpacing(4) self._cbFoo = QComboBox() for x in ["ABC", "DEF", "GHI", "JKL"]: self._cbFoo.addItem(x) self._leBar = QLineEdit('', self) lo.addWidget (self._cbFoo, 3) lo.addSpacing (5) lo.addWidget (QLabel ( 'Bar:')) lo.addWidget (self._leBar, 3) lo.addStretch (5) self.setLayout (lo) class Form (QDialog): def __init__(self,parent=None): QDialog.__init__(self, parent) grid = QGridLayout () tree = QTreeWidget () # Here is the issue? tree.setDragDropMode(QAbstractItemView.InternalMove) tree.setColumnCount(3) for n in range (2): i = QTreeWidgetItem (tree) # create QTreeWidget the sub i i.setText (0, "first" + str (n)) # set the text of the first 0 i.setText (1, "second") for m in range (2): j = QTreeWidgetItem(i) j.setText (0, "child first" + str (m)) #b1 = QCheckBox("push me 0", tree) # this wont work w/ drag by itself either #tree.setItemWidget (tree.topLevelItem(0).child(1), 1, b1) item = InlineEditor(tree) # deal with a combination of multiple controls tree.setItemWidget(tree.topLevelItem(0).child(1), 1, item) grid.addWidget (tree) self.setLayout (grid) app = QApplication ([]) form = Form () form.show () app.exec_ ()
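    A possible workaround (untested sketch): item widgets are not carried across an InternalMove, so one option is to subclass QTreeWidget, let the default dropEvent run, and then re-attach the editor to the dropped item (InlineEditor is the class from the question):

    class EditorTree(QTreeWidget):
        def dropEvent(self, event):
            QTreeWidget.dropEvent(self, event)
            # after the internal move the old widget is gone; restore it on the item
            # under the cursor (adjust the lookup if currentItem is not the moved item)
            item = self.currentItem()
            if item is not None and self.itemWidget(item, 1) is None:
                self.setItemWidget(item, 1, InlineEditor(self))

    The Form would then create EditorTree() instead of QTreeWidget().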

    Read the article

  • jqGrid > datatype: "local" > get and save (at once) all edited cells of a column?

    - by Qualliarys
    Hi, I have a grid with datatype: "local". The data is an array like this: var mydata = [{id:1,valeur:"a_value",designation:"a_designation"}, {id:2,...}, ...]; The second column (named valeur) is the only editable column of the grid (editable:true is set in colModel). In the pager of the grid I have two buttons. One puts all cells of the valeur column into edit mode at once: $("#mygrid").jqGrid('navButtonAdd','#pager',{caption:"Edit values", onClickButton:function(){ var ids = $('#mygrid').jqGrid('getDataIDs'); for(var i=0;i<ids.length+1;i++){ $('#mygrid').jqGrid('editRow',ids[i],true);} }}); and the other should save all the changes of the edited cells at once: $("#mygrid").jqGrid('navButtonAdd','#pager',{caption:"Save changes", onClickButton:function(){var ids = $('#mygrid').jqGrid('getDataIDs'); for(var i=0;i<ids.length+1;i++){ ... ??? ... }}}); When I use: var rd = $("#mygrid").jqGrid('getRowData',ids[i]); alert("valeur="+rd.valeur); each alert shows something like: valeur=< input class="editable" role="textbox" name="valeur" id="1_valeur" style="width: 98%;" type="text" ... So while the cells are in edit mode, every rd.valeur is an input tag; when they are not, I just get the initial values of the cells. How do I get and save all the changes in this column while its cells are being edited?
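    A sketch of one way to fill in the save button (untested): while a row is in edit mode, calling saveRow with 'clientArray' as the url writes the editor values back into the local data without any server request, after which getRowData returns plain cell values again:

    $("#mygrid").jqGrid('navButtonAdd', '#pager', {
        caption: "Save changes",
        onClickButton: function () {
            var ids = $('#mygrid').jqGrid('getDataIDs');
            for (var i = 0; i < ids.length; i++) {
                // false = no success callback; 'clientArray' = save locally only
                $('#mygrid').jqGrid('saveRow', ids[i], false, 'clientArray');
            }
            var allData = $('#mygrid').jqGrid('getRowData');   // now holds the edited values
        }
    });

    Note the loop bound: i < ids.length, since the i < ids.length + 1 in the question runs one index past the end.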

    Read the article

  • Understanding ItemsSource and DataContext in a DataGrid

    - by Ben McCormack
    I'm trying to understand how the ItemsSource and DataContext properties work in a Silverlight Toolkit DataGrid. I'm currently working with dummy data and trying to get the data in the DataGrid to update when the value of a combo box changes. My MainPage.xaml.vb file currently looks like this: Partial Public Class MainPage Inherits UserControl Private IssueSummaryList As List(Of IssueSummary) Public Sub New() GetDummyIssueSummary("Day") InitializeComponent() dgIssueSummary.ItemsSource = IssueSummaryList 'dgIssueSummary.DataContext = IssueSummaryList ' End Sub Private Sub GetDummyIssueSummary(ByVal timeInterval As String) Dim lst As New List(Of IssueSummary)() 'Generate dummy data for lst ' IssueSummaryList = lst End Sub Private Sub ComboBox_SelectionChanged(ByVal sender As System.Object, ByVal e As System.Windows.Controls.SelectionChangedEventArgs) Dim cboBox As ComboBox = CType(sender, ComboBox) Dim cboBoxItem As ComboBoxItem = CType(cboBox.SelectedValue, ComboBoxItem) GetDummyIssueSummary(cboBoxItem.Content.ToString()) End Sub End Class My XAML currently looks this for the DataGrid: <sdk:DataGrid x:Name="dgIssueSummary" AutoGenerateColumns="False" > <sdk:DataGrid.Columns> <sdk:DataGridTextColumn Binding="{Binding ProblemType}" Header="Problem Type"/> <sdk:DataGridTextColumn Binding="{Binding Count}" Header="Count"/> </sdk:DataGrid.Columns> </sdk:DataGrid> The problem is that if I set the value of the ItemsSource property of the data grid equal to IssueSummaryList, it will display the data when it loads, but it won't update when the underlying IssueSummaryList collection changes. If I set the DataContext of the grid to be IssueSummaryList, no data will be displayed when it renders. I must not understand how ItemsSource and DataContext are supposed to function, because I expect one of those properties to "just work" when I assign a List object to them. What do I need to understand and change in my code so that as data changes in the List, the data in the grid is updated?
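    A sketch of one way to get live updates (untested, in VB like the question; it needs Imports System.Collections.ObjectModel): keep a single ObservableCollection assigned to ItemsSource and refill that same instance when the combo box changes, so the grid receives the collection-change notifications:

    Private IssueSummaryList As New ObservableCollection(Of IssueSummary)

    Public Sub New()
        InitializeComponent()
        dgIssueSummary.ItemsSource = IssueSummaryList
        GetDummyIssueSummary("Day")
    End Sub

    Private Sub GetDummyIssueSummary(ByVal timeInterval As String)
        IssueSummaryList.Clear()
        ' ... generate the dummy IssueSummary items for timeInterval and
        '     add each one with IssueSummaryList.Add(item) ...
    End Sub

    Replacing the List with a new instance breaks the link because ItemsSource still points at the old one, and setting DataContext alone shows nothing because the column bindings resolve against each row item, not the grid's DataContext, unless ItemsSource="{Binding}" is also set.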

    Read the article

  • WPF DataGridTemplateColumn. Am I missing something?

    - by plotnick
    <data:DataGridTemplateColumn Header="Name"> <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock Text="{Binding Name}"/> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> <data:DataGridTemplateColumn.CellEditingTemplate> <DataTemplate> <TextBox Text="{Binding Name}"/> </DataTemplate> </data:DataGridTemplateColumn.CellEditingTemplate> </data:DataGridTemplateColumn> It's a clear example of a template column, right? What could be wrong with it? Here is the thing: when a user navigates through the DataGrid with the TAB key, they have to hit TAB twice(!) before they can edit the text in the TextBox. How can I make the cell editable as soon as the column gets focus, ideally as soon as the user starts typing? OK, I found a workaround - in Grid.KeyUp() I put the code below: if (Grid.CurrentColumn.Header.ToString() == "UserName") { if (e.Key != Key.Escape) { Grid.BeginEdit(); // Simply send another TAB press if (Keyboard.FocusedElement is Microsoft.Windows.Controls.DataGridCell) { var keyEvt = new KeyEventArgs(Keyboard.PrimaryDevice, Keyboard.PrimaryDevice.ActiveSource, 0, Key.Tab) { RoutedEvent = Keyboard.KeyDownEvent }; InputManager.Current.ProcessInput(keyEvt); } } }
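    One commonly suggested alternative to the KeyUp workaround (untested sketch) is to push keyboard focus into the editor from the editing template itself, so the first TAB both opens the editor and focuses the TextBox:

    <data:DataGridTemplateColumn.CellEditingTemplate>
        <DataTemplate>
            <TextBox Text="{Binding Name}"
                     FocusManager.FocusedElement="{Binding RelativeSource={RelativeSource Self}}" />
        </DataTemplate>
    </data:DataGridTemplateColumn.CellEditingTemplate>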

    Read the article

  • WPF UserControl weird binding problem

    - by Heko
    Hello! I'm using a Ribbon Window, and in the content area beneath it I have a grid in which I display UserControls. To demonstrate my problem, let's take a look at this simple UserControl: <ListView x:Name="lvPersonList"> <ListView.View> <GridView> <GridViewColumn Width="120" Header="Name" DisplayMemberBinding="{Binding Name}"/> <GridViewColumn Width="120" Header="Height" DisplayMemberBinding="{Binding Height}"/> </GridView> </ListView.View> </ListView> And the code: public partial class MyUserControl: UserControl { private List<Person> personList; public MyUserControl() { InitializeComponent(); this.personList = new List<Person>(); this.personList.Add(new Person { Name = "Chuck Norris", Height = 210 }); this.personList.Add(new Person { Name = "John Rambo", Height = 200 }); this.lvPersonList.ItemsSource = personList; } } public class Person { public string Name { get; set; } public int Height { get; set; } } The parent Window: <Grid x:Name="grdContent" DockPanel.Dock="Top"> <controls:MyUserControl x:Name="myUserControl" Visibility="Visible"/> </Grid> I don't understand why this binding doesn't work. Instead of the values (Name and Height) I get full class names. If I use this code in a Window it works fine. Any ideas? I would like this user control to work on its own (it gets the data from the DB and presents it in a ListView)... Thanks!
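    If DisplayMemberBinding keeps being ignored inside the UserControl, one hedged workaround (untested) is to switch the columns to an explicit CellTemplate, which binds the same property but avoids the fall-back-to-ToString rendering described above:

    <GridViewColumn Width="120" Header="Name">
        <GridViewColumn.CellTemplate>
            <DataTemplate>
                <TextBlock Text="{Binding Name}" />
            </DataTemplate>
        </GridViewColumn.CellTemplate>
    </GridViewColumn>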

    Read the article

  • Android: Creating a Scrollable Layout

    - by MD
    I'm trying to create a "scrollable" layout in Android. Even using developers.android.com, though, I feel a little bit lost at the moment. I'm somewhat new to Java, but not so much that I feel I should be having these issues--being new to Android is the bigger problem right now. The layout I'm trying to create should scroll in a sort of a "grid". I THINK what I'm looking for is the Gallery view, but I'm really lost as to how to implement it at the moment. I want it to "snap" to center the frame, like in the actual Gallery application. Essentially, if I had a photo gallery of 9 pictures, the idea is to scroll between them up/down AND side to side, in a 3x3 manner. Doesn't need to dynamically adjust, or anything like that, I just want a grid I can scroll through. I'm also not asking for anyone to give me explicit code for it--I'm trying to learn, more than anything. But pointing me in the right direction for helpful layout programming resources would be greatly appreciated, and confirming if it's a Gallery view I'm looking for would also be really helpful. EDIT: To clarify, the goal is to have ONE item on screen at a time. If you scroll between one item and the next, the previous one leaves the screen, and the new one snaps into place. So if it were a photo gallery, each spot on the grid would take up the entire screen size, approximately, and would be flung out of the viewable area when you slide across to the next photo, in either direction. (Photos are just an example for illustration purposes)

    Read the article

  • ASP.NET Timer Event

    - by K Ratnajyothi
    protected void SubmitButtonClicked(object sender, EventArgs e) { System.Timers.Timer timer = new System.Timers.Timer(); --- --- //line 1 get_datasource(); String message = "submitted."; ScriptManager.RegisterStartupScript(this.Page, this.GetType(), "popupAlert", "popupAlert(' " + message + " ');", true); timer.Interval = 30000; timer.Elapsed += new ElapsedEventHandler(timer_tick); // Only raise the event the first time Interval elapses. timer.AutoReset = false; timer.Enabled = true; } } protected void timer_tick(object sender, EventArgs e) { //line 2 get_datasource(); GridView2.DataBind(); } The problem is with the data displayed in the grid view. When get_datasource is called after line 1, the updated data appears in the grid view because that call happens during a postback. But when the timer fires and timer_tick (line 2) calls get_datasource and DataBind, the updated data never becomes visible in the grid view. It is not getting updated because timer_tick is not a postback event.
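    That diagnosis is right: a System.Timers.Timer elapses on the server long after the response for the button click has already been sent, so nothing it does can reach the rendered page. A sketch of the usual alternative (untested; control IDs are hypothetical) is a client-driven asp:Timer inside an UpdatePanel, whose Tick runs as a real partial postback:

    <asp:ScriptManager ID="ScriptManager1" runat="server" />
    <asp:UpdatePanel ID="UpdatePanel1" runat="server">
        <ContentTemplate>
            <asp:Timer ID="Timer1" runat="server" Interval="30000" OnTick="Timer1_Tick" />
            <asp:GridView ID="GridView2" runat="server" />
        </ContentTemplate>
    </asp:UpdatePanel>

    Inside Timer1_Tick the existing get_datasource() and GridView2.DataBind() calls then execute during a postback, so the refreshed data is actually rendered.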

    Read the article

  • Like EXEC command in Silverlight (save and load properties of elements dynamically)

    - by Meysam Javadi
    I have some elements in my container and want to save all of their properties. I enumerate the elements with VisualTreeHelper and save their attributes in the DB; the question is how to retrieve those properties later and apply them again. I was hoping Silverlight had a statement that behaves like EXEC in SQL Server. I save the properties in one line delimited by semicolons (if you have a better suggestion, I'd appreciate it). Edit: suppose this scenario: the end user chooses a tool from MyToolbox (a container like a Grid), a dialog shows its properties for creation, and finally the Grid is drawn. Later the user chooses one element (like a Button) and drops it onto one of the grid's cells. Now I want to save the workspace the user created. My RootLayout has one container control, so every element is a child of it. So far the plan is to create one string containing the general properties (not all of them) and save it to the DB; when I load the control back, I create an element of the saved type and apply the saved properties to it, with something like an EXEC command. Is this possible? Do you have a better approach for this scenario? (Please guide me with an example.)
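    Silverlight has no EXEC-style statement, but XamlReader.Load is probably the closest fit for this scenario: build a XAML fragment from the saved type and properties and let the parser recreate the element. A rough, untested sketch (LayoutRoot is a hypothetical container name, and the saved record format is just an example):

    // the saved record might look like: "Button;Width=120;Height=32;Content=Saved button"
    string xaml =
        "<Button xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\" " +
        "Width=\"120\" Height=\"32\" Content=\"Saved button\" />";

    var restored = (Button)System.Windows.Markup.XamlReader.Load(xaml);
    LayoutRoot.Children.Add(restored);

    The string itself can be assembled from the semicolon-delimited properties you already save, as long as the values are escaped as valid XAML attribute values.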

    Read the article

  • GridViewColumn not subscribing to PropertyChanged event in a ListView

    - by Chris Wenham
    I have a ListView with a GridView that's bound to the properties of a class that implements INotifyPropertyChanged, like this: <ListView Name="SubscriptionView" Grid.Column="0" Grid.ColumnSpan="2" Grid.Row="2" ItemsSource="{Binding Path=Subscriptions}"> <ListView.View> <GridView> <GridViewColumn Width="24" CellTemplate="{StaticResource IncludeSubscriptionTemplate}"/> <GridViewColumn Width="150" DisplayMemberBinding="{Binding Path=Name}" Header="Subscription"/> <GridViewColumn Width="75" DisplayMemberBinding="{Binding Path=RecordsWritten}" Header="Records"/> <GridViewColumn Width="Auto" CellTemplate="{StaticResource FilenameTemplate}"/> </GridView> </ListView.View> </ListView> The class looks like this: public class Subscription : INotifyPropertyChanged { public int RecordsWritten { get { return _records; } set { _records = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("RecordsWritten")); } } private int _records; ... } So I fire up a BackgroundWorker and start writing records, updating the RecordsWritten property and expecting the value to change in the UI, but it doesn't. In fact, the value of PropertyChanged on the Subscription objects is null. This is a puzzler, because I thought WPF is supposed to subscribe to the PropertyChanged event of data objects that implement INotifyPropertyChanged. Am I doing something wrong here?
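    A PropertyChanged value of null means nothing has subscribed to those particular instances; the usual culprit is that the background work updates copies of the Subscription objects rather than the instances inside the bound Subscriptions collection. A hedged sketch of the standard pattern, with the update routed through the bound items:

    public class Subscription : INotifyPropertyChanged
    {
        private int _records;
        public int RecordsWritten
        {
            get { return _records; }
            set { _records = value; OnPropertyChanged("RecordsWritten"); }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        private void OnPropertyChanged(string name)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(name));
        }
    }

    // In the BackgroundWorker, update the instances the ListView is actually showing, e.g.
    //   var sub = Subscriptions.First(s => s.Name == subscriptionName);
    //   sub.RecordsWritten++;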

    Read the article

  • Animate UserControl in WPF?

    - by sanjeev40084
    I have two XAML files: MainWindow.xaml and a UserControl, EditTaskView.xaml. MainWindow.xaml contains a ListBox, and when any item in the ListBox is double-clicked, an edit window from the EditTaskView UserControl is displayed. I am trying to animate this UserControl every time an item in the ListBox is double-clicked. I added some animation to the UserControl, but the animation only runs once. How can I make the animation run every time an item in the ListBox is double-clicked? <ListBox x:Name="lstBxTask" Style="{StaticResource ListBoxItems}" MouseDoubleClick="lstBxTask_MouseDoubleClick"> <ListBox.ItemTemplate> <DataTemplate> <StackPanel> <Rectangle Style="{StaticResource LineBetweenListBox}"/> <StackPanel Orientation="Horizontal"> <TextBlock Text="{Binding Taskname}" Style="{StaticResource TextInListBox}"/> <Button Name="btnDelete" Style="{StaticResource DeleteButton}" Click="btnDelete_Click"/> </StackPanel> </StackPanel> </DataTemplate> </ListBox.ItemTemplate> </ListBox> <ToDoTask:EditTaskView x:Name="EditTask" Grid.Row="1" Grid.RowSpan="2" Grid.ColumnSpan="2" Visibility="Collapsed"/> In the MainWindow code there is a mouse double-click event handler, which changes the visibility of EditTaskView to Visible. Suggestions?
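    One way to make it replay (untested sketch; "ShowEditTask" is a hypothetical storyboard key defined in the UserControl's resources) is to restart the storyboard explicitly from the double-click handler rather than relying on a Loaded trigger, which only fires once:

    private void lstBxTask_MouseDoubleClick(object sender, MouseButtonEventArgs e)
    {
        EditTask.Visibility = Visibility.Visible;

        var sb = (Storyboard)EditTask.Resources["ShowEditTask"];
        sb.Begin(EditTask, true);   // isControllable = true, so it can be restarted next time
    }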

    Read the article

< Previous Page | 110 111 112 113 114 115 116 117 118 119 120 121  | Next Page >