Search Results

Search found 9215 results on 369 pages for 'double pointers'.


  • Using scipy.interpolate.splrep function

    - by Koustav Ghosal
    I am trying to fit a cubic spline to a given set of points. My points are not ordered. I CANNOT sort or reorder the points, since I need that information. But since the function scipy.interpolate.splrep works only on non-duplicate and monotonically increasing points, I have defined a function that maps the x-coordinates to a monotonically increasing space. My old points are:

        xpoints=[4913.0, 4912.0, 4914.0, 4913.0, 4913.0, 4913.0, 4914.0, 4915.0, 4918.0, 4921.0, 4925.0, 4932.0, 4938.0, 4945.0, 4950.0, 4954.0, 4955.0, 4957.0, 4956.0, 4953.0, 4949.0, 4943.0, 4933.0, 4921.0, 4911.0, 4898.0, 4886.0, 4874.0, 4865.0, 4858.0, 4853.0, 4849.0, 4848.0, 4849.0, 4851.0, 4858.0, 4864.0, 4869.0, 4877.0, 4884.0, 4893.0, 4903.0, 4913.0, 4923.0, 4935.0, 4947.0, 4959.0, 4970.0, 4981.0, 4991.0, 5000.0, 5005.0, 5010.0, 5015.0, 5019.0, 5020.0, 5021.0, 5023.0, 5025.0, 5027.0, 5027.0, 5028.0, 5028.0, 5030.0, 5031.0, 5033.0, 5035.0, 5037.0, 5040.0, 5043.0]
        ypoints=[10557.0, 10563.0, 10567.0, 10571.0, 10575.0, 10577.0, 10578.0, 10581.0, 10582.0, 10582.0, 10582.0, 10581.0, 10578.0, 10576.0, 10572.0, 10567.0, 10560.0, 10550.0, 10541.0, 10531.0, 10520.0, 10511.0, 10503.0, 10496.0, 10490.0, 10487.0, 10488.0, 10488.0, 10490.0, 10495.0, 10504.0, 10513.0, 10523.0, 10533.0, 10542.0, 10550.0, 10556.0, 10559.0, 10560.0, 10559.0, 10555.0, 10550.0, 10543.0, 10533.0, 10522.0, 10514.0, 10505.0, 10496.0, 10490.0, 10486.0, 10482.0, 10481.0, 10482.0, 10486.0, 10491.0, 10497.0, 10506.0, 10516.0, 10524.0, 10534.0, 10544.0, 10552.0, 10558.0, 10564.0, 10569.0, 10573.0, 10576.0, 10578.0, 10581.0, 10582.0]

    The code for the mapping function and interpolation is:

        xnew = []
        ynew = ypoints
        for c3, i in enumerate(xpoints):
            if np.isfinite(np.log(i * pow(2, c3))):
                xnew.append(np.log(i * pow(2, c3)))
            else:
                if c3 == 0:  # the original read "if c==0:", a typo for the loop counter c3
                    xnew.append(np.random.random_sample())
                else:
                    xnew.append(xnew[c3 - 1] + np.random.random_sample())
        xnew = np.asarray(xnew)
        ynew = np.asarray(ynew)
        constant1 = 10.0
        nknots = len(xnew) / constant1
        idx_knots = (np.arange(1, len(xnew) - 1, (len(xnew) - 2) / np.double(nknots))).astype('int')
        knots = [xnew[i] for i in idx_knots]
        knots = np.asarray(knots)
        int_range = np.linspace(min(xnew), max(xnew), len(xnew))
        tck = interpolate.splrep(xnew, ynew, k=3, task=-1, t=knots)
        y1 = interpolate.splev(int_range, tck, der=0)

    The code throws an error in interpolate.splrep() for some sets of points, like the one above:

        File "/home/neeraj/Desktop/koustav/res/BOS5/fit_spline3.py", line 58, in save_spline_f
          tck = interpolate.splrep(xnew,ynew,k=3,task=-1,t=knots)
        File "/usr/lib/python2.7/dist-packages/scipy/interpolate/fitpack.py", line 465, in splrep
          raise _iermess[ier][1](_iermess[ier][0])
        ValueError: Error on input data

    But for other sets of points it works fine, for example the following:

        xpoints=[1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1629.0, 1630.0, 1630.0, 1630.0, 1631.0, 1631.0, 1631.0, 1631.0, 1630.0, 1629.0, 1629.0, 1629.0, 1628.0, 1627.0, 1627.0, 1625.0, 1624.0, 1624.0, 1623.0, 1620.0, 1618.0, 1617.0, 1616.0, 1615.0, 1614.0, 1614.0, 1612.0, 1612.0, 1612.0, 1611.0, 1610.0, 1609.0, 1608.0, 1607.0, 1607.0, 1603.0, 1602.0, 1602.0, 1601.0, 1601.0, 1600.0, 1599.0, 1598.0]
        ypoints=[10570.0, 10572.0, 10572.0, 10573.0, 10572.0, 10572.0, 10571.0, 10570.0, 10569.0, 10565.0, 10564.0, 10563.0, 10562.0, 10560.0, 10558.0, 10556.0, 10554.0, 10551.0, 10548.0, 10547.0, 10544.0, 10542.0, 10541.0, 10538.0, 10534.0, 10532.0, 10531.0, 10528.0, 10525.0, 10522.0, 10519.0, 10517.0, 10516.0, 10512.0, 10509.0, 10509.0, 10507.0, 10504.0, 10502.0, 10500.0, 10501.0, 10499.0, 10498.0, 10496.0, 10491.0, 10492.0, 10488.0, 10488.0, 10488.0, 10486.0, 10486.0, 10485.0, 10485.0, 10486.0, 10483.0, 10483.0, 10482.0, 10480.0]

    Can anybody suggest what's happening? Thanks in advance.
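
    A minimal sketch, not from the original post, of an alternative that sidesteps the monotonicity requirement entirely: because the points trace a curve rather than a function y = f(x), a parametric spline via scipy.interpolate.splprep fits x and y against a shared parameter u, so the x values never need to be sorted, unique, or increasing. The short xpoints/ypoints lists below are placeholders for the full arrays above.

        # Sketch: parametric spline fit for unordered 2-D curve points.
        import numpy as np
        from scipy import interpolate

        xpoints = [4913.0, 4912.0, 4914.0, 4915.0, 4918.0]   # placeholder data
        ypoints = [10557.0, 10563.0, 10567.0, 10571.0, 10575.0]

        # splprep parameterizes the curve by u in [0, 1]; x need not be monotonic.
        tck, u = interpolate.splprep([xpoints, ypoints], k=3, s=0)
        unew = np.linspace(0.0, 1.0, 200)
        xs, ys = interpolate.splev(unew, tck)  # smooth curve through the points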


  • MySQL multiple selects batching in .NET

    - by Amith George
    I have a situation in my application. For each x-axis point in my chart, I am plotting 5 y-axis values. To calculate each of these 5 values, I need to make 4 different queries. I.e., for each x-axis point I need to fire 20 SQL queries. Now, I need to plot 40 such points in my chart. It's resulting in pathetic performance, where it takes close to a minute to get all the data back from the database.

    Each of the 4 different queries consists of a join between 2 tables. One has only 6 rows. The other has close to 10,000. Each of the 4 queries has a different WHERE clause, so they are different queries. For each point on the x-axis, only the values for the WHERE clauses change.

    I have tried combining each of the 4 queries into one big string, basically batching the four SELECTs. These are again batched for each y-axis value. So, for each x-axis point, I am now firing one big command that consists of 20 different SELECT statements. Technically, I should be experiencing a big performance boost, right? Instead of hitting the db 40x5x4 = 800 times, I am now hitting it just 40 times. But instead of taking 60 seconds, it's taking 50-55 seconds... not much of a help. I am using MySQL 5.1, and the 6.1 version of its .Net connector. What can I do to improve the performance?

    Edit: One of the 4 queries is as follows:

        SELECT SUM(TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1)) * T2.col1 / (3600 * 1000)) AS TotalTime
        FROM Table T1
        JOIN Table T2 ON T1.col3 = T2.col3
        WHERE T1.col4 = 'i'
          AND T1.col1 >= '2009-12-25 00:00:00'
          AND T1.col2 <= '2009-12-26 00:00:00';

    The other 3 queries are similar, only the WHERE clause changes slightly. This set of 4 queries is fired 5 times: the first 3 times against the join of tables T1 and T2, passing in different values for col4, and the next two times against the join of tables T3 and T2, passing in different values for col4. These 5 values are the y-axis values for a particular x-axis point. The data returned by all these queries is in the same format, so we tried doing a UNION ALL on all these queries. No substantial difference. One strange thing, however: after indexing the foreign key on table T1 [while it contained over a lakh (100,000) records], the queries were using the index, but they had become slower. At times, the queries would take double the time to return the data.
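
    A hypothetical sketch, not from the thread, of one way to cut round trips further: since the four variants differ only in the value of T1.col4, conditional aggregation can compute them all in one query. The table names T1/T2 and the second col4 value 'j' are placeholder assumptions standing in for the obfuscated names in the question.

        # Sketch: merge the four WHERE-clause variants into a single query with
        # conditional aggregation, so each x-axis point costs one round trip.
        query = """
            SELECT
              SUM(CASE WHEN T1.col4 = 'i' THEN
                    TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1)) * T2.col1 / (3600 * 1000)
                  END) AS TotalTime_i,
              SUM(CASE WHEN T1.col4 = 'j' THEN
                    TIME_TO_SEC(TIMEDIFF(T1.col2, T1.col1)) * T2.col1 / (3600 * 1000)
                  END) AS TotalTime_j
            FROM T1
            JOIN T2 ON T1.col3 = T2.col3
            WHERE T1.col1 >= %s AND T1.col2 <= %s
        """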


  • Stored proc running 30% slower through Java versus running directly on database

    - by James B
    Hi All, I'm using Java 1.6, JTDS 1.2.2 (also just tried 1.2.4 to no avail) and SQL Server 2005 to create a CallableStatement to run a stored procedure (with no parameters). I am seeing the Java wrapper running the same stored procedure 30% slower than using SQL Server Management Studio. I've run the MS SQL profiler and there is little difference in I/O between the two processes, so I don't think it's related to query plan caching.

    The stored proc takes no arguments and returns no data. It uses a server-side cursor to calculate the values that are needed to populate a table. I can't see how calling a stored proc from Java should add a 30% overhead; surely it's just a pipe to the database that the SQL is sent down, and then the database executes it... Could the database be giving the Java app a different query plan? I've posted to both the MSDN forums and the SourceForge JTDS forums (topic: "stored proc slower in JTDS than direct in DB"). I was wondering if anyone has any suggestions as to why this might be happening? Thanks in advance, -James (N.B. Fear not, I will collate any answers I get in other forums together here once I find the solution.)

    Java code snippet:

        sLogger.info("Preparing call...");
        stmt = mCon.prepareCall("SP_WB200_POPULATE_TABLE_limited_rows");
        sLogger.info("Call prepared. Executing procedure...");
        stmt.executeQuery();
        sLogger.info("Procedure complete.");

    I have run SQL profiler, and found the following:

        Java app : CPU: 466,514  Reads: 142,478,387  Writes: 284,078  Duration: 983,796
        SSMS     : CPU: 466,973  Reads: 142,440,401  Writes: 280,244  Duration: 769,851

    (Both with DBCC DROPCLEANBUFFERS run prior to profiling, and both produce the correct number of rows.) So my conclusion is that they both execute the same reads and writes; it's just that the way they are doing it is different. What do you guys think?

    It turns out that the query plans are significantly different for the different clients (the Java client is updating an index during an insert that isn't in the faster SQL client; also, the way it is executing joins is different (nested loops vs. gather streams, nested loops vs. index scans, argh!)). Quite why this is, I don't know yet (I'll re-post when I do get to the bottom of it).

    Epilogue: I couldn't get this to work properly. I tried homogenising the connection properties (arithabort, ansi_nulls etc.) between the Java and Mgmt Studio clients. It ended up that the two different clients had very similar query/execution plans (but still with different actual plan_ids). I posted a summary of what I found to the MSDN SQL Server forums, as I found differing performance not just between a JDBC client and Management Studio, but also between Microsoft's own command line client, SQLCMD. I also checked some more radical things, like network traffic, and wrapping the stored proc inside another stored proc, just for grins. I have a feeling the problem lies somewhere in the way the cursor was being executed, and it was somehow giving rise to the Java process being suspended, but why a different client should give rise to this different locking/waiting behaviour when nothing else is running and the same execution plan is in operation is a little beyond my skills (I'm no DBA!). As a result, I have decided that 4 days is enough of anyone's time to waste on something like this, so I will grudgingly code around it (if I'm honest, the stored procedure needed re-coding to be more incremental instead of re-calculating all data each week anyway), and chalk this one down to experience.

    I'll leave the question open. Big thanks to everyone who put their hat in the ring; it was all useful, and if anyone comes up with anything further, I'd love to hear some more options... and if anyone finds this post as a result of seeing this behaviour in their own environments, then hopefully there are some pointers here that you can try yourself, and hopefully see further than we did. I'm ready for my weekend now! -James


  • How to use pthread_atfork() and pthread_once() to reinitialize mutexes in child processes

    - by Blair Zajac
    We have a C++ shared library that uses ZeroC's Ice library for RPC, and unless we shut down Ice's runtime, we've observed child processes hanging on random mutexes. The Ice runtime starts threads, has many internal mutexes and keeps open file descriptors to servers. Additionally, we have a few mutexes of our own to protect our internal state. Our shared library is used by hundreds of internal applications, so we don't have control over when the process calls fork(); we need a way to safely shut down Ice and lock our mutexes while the process forks.

    Reading the POSIX standard on pthread_atfork() on handling mutexes and internal state:

        Alternatively, some libraries might have been able to supply just a child routine that
        reinitializes the mutexes in the library and all associated states to some known value
        (for example, what it was when the image was originally executed). This approach is not
        possible, though, because implementations are allowed to fail *_init() and *_destroy()
        calls for mutexes and locks if the mutex or lock is still locked. In this case, the
        child routine is not able to reinitialize the mutexes and locks.

    On Linux, this test C program returns EPERM from pthread_mutex_unlock() in the child pthread_atfork() handler. (Linux requires adding _NP to the PTHREAD_MUTEX_ERRORCHECK macro for it to compile. The program is linked from this good thread.)

    Given that it's technically not safe or legal to unlock or destroy a mutex in the child, I'm thinking it's better to have pointers to mutexes and then have the child make a new pthread_mutex_t on the heap and leave the parent's mutexes alone, thereby having a small memory leak. The only issue is how to reinitialize the state of the library, and I'm thinking of resetting a pthread_once_t. Maybe because POSIX has an initializer for pthread_once_t, it can be reset to its initial state.

        #include <pthread.h>
        #include <stdlib.h>
        #include <string.h>

        static pthread_once_t once_control = PTHREAD_ONCE_INIT;
        static pthread_mutex_t *mutex_ptr = 0;

        static void setup_new_mutex() {
            mutex_ptr = malloc(sizeof(*mutex_ptr));
            pthread_mutex_init(mutex_ptr, 0);
        }

        static void prepare() {
            pthread_mutex_lock(mutex_ptr);
        }

        static void parent() {
            pthread_mutex_unlock(mutex_ptr);
        }

        static void child() {
            // Reset the once control.
            pthread_once_t once = PTHREAD_ONCE_INIT;
            memcpy(&once_control, &once, sizeof(once_control));
            setup_new_mutex();
        }

        static void init() {
            setup_new_mutex();
            pthread_atfork(&prepare, &parent, &child);
        }

        int my_library_call(int arg) {
            pthread_once(&once_control, &init);
            pthread_mutex_lock(mutex_ptr);
            // Do something here that requires the lock.
            int result = 2 * arg;
            pthread_mutex_unlock(mutex_ptr);
            return result;
        }

    In the above sample, in child() I only reset the pthread_once_t by making a copy of a fresh pthread_once_t initialized with PTHREAD_ONCE_INIT. A new pthread_mutex_t is only created when the library function is invoked in the child process. This is hacky, but maybe the best way of dealing with this while skirting the standards. If the pthread_once_t contains a mutex, then the system must have a way of initializing it from its PTHREAD_ONCE_INIT state. If it contains a pointer to a mutex allocated on the heap, then it'll be forced to allocate a new one and set the address in the pthread_once_t. I'm hoping it doesn't use the address of the pthread_once_t for anything special, which would defeat this.
    Searching the comp.programming.threads group for pthread_atfork() shows a lot of good discussion and how little the POSIX standard really provides to solve this problem. There's also the issue that one should only call async-signal-safe functions from pthread_atfork() handlers, and it appears the most important one is the child handler, where only a memcpy() is done. Does this work? Is there a better way of dealing with the requirements of our shared library?


  • Freezes (not crashes) with GCD, blocks and Core Data

    - by Lukasz
    I have recently rewritten my Core Data driven database controller to use Grand Central Dispatch to manage fetching and importing in the background. The controller can operate on 2 NSManagedObjectContexts:

    NSManagedObjectContext *mainMoc: instance variable for the main thread. This context is used only for quick access by the UI, on the main thread or via the dispatch_get_main_queue() global queue.

    NSManagedObjectContext *bgMoc: for background tasks (importing, and fetching data for the NSFetchedResultsController for tables). These background tasks are fired ONLY on a user-defined queue: dispatch_queue_t bgQueue (instance variable in the database controller object). Fetching data for tables is done in the background so as not to block the UI when bigger or more complicated predicates are performed. Example fetching code for NSFetchedResultsController in my table view controllers:

        -(void)fetchData{
            dispatch_async([CDdb db].bgQueue, ^{
                NSError *error = nil;
                [[self.fetchedResultsController fetchRequest] setPredicate:self.predicate];
                if (self.fetchedResultsController && ![self.fetchedResultsController performFetch:&error]) {
                    NSSLog(@"Unresolved error in fetchData %@", error);
                }
                if (!initial_fetch_attampted) initial_fetch_attampted = YES;
                fetching = NO;
                dispatch_async(dispatch_get_main_queue(), ^{
                    [self.table reloadData];
                    [self.table scrollRectToVisible:CGRectMake(0, 0, 100, 20) animated:YES];
                });
            });
        } // end of fetchData function

    bgMoc merges with mainMoc on save, using NSManagedObjectContextDidSaveNotification:

        - (void)bgMocDidSave:(NSNotification *)saveNotification {
            // CDdb - bgMoc did save - merging changes with main mainMoc
            dispatch_async(dispatch_get_main_queue(), ^{
                [self.mainMoc mergeChangesFromContextDidSaveNotification:saveNotification];
                // Extra notification for some other, potentially interested clients
                [[NSNotificationCenter defaultCenter] postNotificationName:DATABASE_SAVED_WITH_CHANGES object:saveNotification];
            });
        }

        - (void)mainMocDidSave:(NSNotification *)saveNotification {
            // CDdb - main mainMoc didSave - merging changes with bgMoc
            dispatch_async(self.bgQueue, ^{
                [self.bgMoc mergeChangesFromContextDidSaveNotification:saveNotification];
            });
        }

    The NSFetchedResultsController delegate has only one method implemented (for simplicity):

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            dispatch_async(dispatch_get_main_queue(), ^{
                [self fetchData];
            });
        }

    This way I am trying to follow the Apple recommendation for Core Data: 1 NSManagedObjectContext per thread. I know this pattern is not completely clean, for at least 2 reasons: bgQueue does not necessarily fire the same thread after suspension, but since it is serial, it should not matter much (there are never 2 threads trying to access the bgMoc NSManagedObjectContext dedicated to it). And sometimes table view data source methods will ask the NSFetchedResultsController for info from bgMoc (since the fetch is done on bgQueue), like section count, fetched object count per section, etc.

    Even with these flaws this approach works pretty well for 95% of the application running time, until... AND HERE GOES MY QUESTION: Sometimes, very randomly, the application freezes, but does not crash. It does not respond to any touch, and the only way to get it back to life is to restart it completely (switching to the background and back does not help). No exception is thrown and nothing is printed to the console (I have breakpoints set for all exceptions in Xcode).
    I have tried to debug it using Instruments (the Time Profiler especially) to see if there is something hard going on on the main thread, but nothing shows up. I am aware that GCD and Core Data are the main suspects here, but I have no idea how to track / debug this. Let me point out that this also happens when I dispatch all the tasks to the queues asynchronously only (using dispatch_async everywhere). This makes me think it is not just a standard deadlock. Is there any possibility or hint of how I could get more info on what is going on? Some extra debug flags, Instruments magical tricks, build settings etc.? Any suggestions on what could be the cause are very much appreciated, as well as (or) pointers on how to implement background fetching for NSFetchedResultsController and background importing in a better way.


  • Help finding longest non-repeating path through connected nodes - Python

    - by Jordan Magnuson
    I've been working on this for a couple of days now without success. Basically, I have a bunch of nodes arranged in a 2D matrix. Every node has four neighbors, except for the nodes on the sides and corners of the matrix, which have 3 and 2 neighbors, respectively. Imagine a bunch of square cards laid out side by side in a rectangular area--the project is actually simulating a sort of card/board game. Each node may or may not be connected to the nodes around it. Each node has a function (get_connections()) that returns the nodes immediately around it that it is connected to (so anywhere from 0 to 4 nodes are returned). Each node also has an "index" property that contains its position in the board matrix (e.g. '1, 4' - row 1, col 4). What I am trying to do is find the longest non-repeating path of connected nodes given a particular "start" node.

    I've uploaded a couple of images that should give a good idea of what I'm trying to do. In both images, the highlighted red cards are supposedly the longest path of connected cards containing the most upper-left card. However, you can see in both images that a couple of cards that should be in the path have been left out (Romania and Maldova in the first image, Greece and Turkey in the second).

    Here's the recursive function that I am using currently to find the longest path, given a starting node/card:

        def get_longest_trip(self, board, processed_connections = list(), processed_countries = list()):
            # Append this country to the processed countries list,
            # so we don't re-double over it
            processed_countries.append(self)
            possible_trips = dict()
            if self.get_connections(board):
                for i, card in enumerate(self.get_connections(board)):
                    if card not in processed_countries:
                        processed_connections.append((self, card))
                        possible_trips[i] = card.get_longest_trip(board, processed_connections, processed_countries)
                if possible_trips:
                    longest_trip = []
                    for i, trip in possible_trips.iteritems():
                        trip_length = len(trip)
                        if trip_length > len(longest_trip):
                            longest_trip = trip
                    longest_trip.append(self)
                    return longest_trip
                else:
                    print
                    card_list = []
                    card_list.append(self)
                    return card_list
            else:
                # If no connections from start_card, just return the start card
                # as the longest trip
                card_list = []
                card_list.append(board.start_card)
                return card_list

    The problem here has to do with the processed_countries list: if you look at my first screenshot, you can see that what has happened is that when Ukraine came around, it looked at its two possible choices for longest path (Maldova-Romania, or Turkey-Bulgaria), saw that they were both equal, and chose one indiscriminately. Now when Hungary comes around, it can't attempt to make a path through Romania (where the longest path would actually be), because Romania has been added to the processed_countries list by Ukraine.

    Any help on this is EXTREMELY appreciated. If you can find me a solution to this, recursive or not, I'd be happy to donate some $$ to you. I've uploaded my full source code (Python 2.6, Pygame 1.9 required) to: http://www.necessarygames.com/junk/planes_trains.zip The relevant code is in src/main.py, which is all set to run.
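
    A minimal sketch, not from the post, of the usual fix: give each recursion branch its own copy of the visited set instead of one shared list, so exploring one branch can never block a sibling branch. The node API (get_connections) follows the post; the function name is hypothetical. Note that the longest simple path problem is NP-hard in general, so this exhaustive search is exponential in the worst case, though fine for board-sized graphs.

        # Sketch: exhaustive DFS where the visited set is copied per branch
        # rather than shared, avoiding the processed_countries problem above.
        def longest_path(node, board, visited=frozenset()):
            visited = visited | {node}   # new set for this branch only
            best = [node]
            for neighbor in node.get_connections(board):
                if neighbor not in visited:
                    candidate = [node] + longest_path(neighbor, board, visited)
                    if len(candidate) > len(best):
                        best = candidate
            return best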


  • ListView: convertView / holder getting confused

    - by Steve H
    I'm working with a ListView, trying to get the convertView / referenceHolder optimisation to work properly, but it's giving me trouble. (This is the system where you store the R.id.xxx pointers as a tag for each View to avoid having to call findViewById.) I have a ListView populated with simple rows of an ImageView and some text, but the ImageView can be formatted either for portrait-sized images (tall and narrow) or landscape-sized images (short and wide). It's adjusting this formatting for each row which isn't working as I had hoped.

    The basic system is that to begin with, it inflates the layout for each row, sets the ImageView's settings based on the data, and includes an int denoting the orientation in the tag containing the R.id.xxx values. Then when it starts reusing convertViews, it checks this saved orientation against the orientation of the new row. The theory is that if the orientation is the same, then the ImageView should already be set up correctly. If it isn't, then it sets the parameters for the ImageView as appropriate and updates the tag.

    However, I found that it was somehow getting confused; sometimes the tag would get out of sync with the orientation of the ImageView. For example, the tag would still say portrait, but the actual ImageView would still be in landscape layout. I couldn't find a pattern to how or when this happened; it wasn't consistent by orientation, position in the list or speed of scrolling. I can solve the problem by simply removing the check of convertView's orientation and always setting the ImageView's parameters, but that seems to defeat the purpose of this optimisation. Can anyone see what I've done wrong in the code below?

        static LinearLayout.LayoutParams layoutParams;
        (...)

        public View getView(int position, View convertView, ViewGroup parent){
            ReferenceHolder holder;
            if (convertView == null){
                convertView = inflater.inflate(R.layout.pick_image_row, null);
                holder = new ReferenceHolder();
                holder.getIdsAndSetTag(convertView, position);
                if (data[position][ORIENTATION] == LANDSCAPE) {
                    // Layout defaults to portrait settings, so ImageView size needs adjusting.
                    // layoutParams is modified here, with specific values for width, height, margins etc
                    holder.image.setLayoutParams(layoutParams);
                }
                holder.orientation = data[position][ORIENTATION];
            } else {
                holder = (ReferenceHolder) convertView.getTag();
                if (holder.orientation != data[position][ORIENTATION]){
                    // This is the key if statement for my question
                    switch (image[position][ORIENTATION]) {
                        case PORTRAIT:
                            // layoutParams is reset to the Portrait settings
                            holder.orientation = data[position][ORIENTATION];
                            break;
                        case LANDSCAPE:
                            // layoutParams is reset to the Landscape settings
                            holder.orientation = data[position][ORIENTATION];
                            break;
                    }
                    holder.image.setLayoutParams(layoutParams);
                }
            }

            // and the row's image and text is set here, using holder.image.xxx
            // and holder.text.xxx

            return convertView;
        }

        static class ReferenceHolder {
            ImageView image;
            TextView text;
            int orientation;

            void getIdsAndSetTag(View v, int position){
                image = (ImageView) v.findViewById(R.id.pickImageImage);
                text = (TextView) v.findViewById(R.id.pickImageText);
                orientation = data[position][ORIENTATION];
                v.setTag(this);
            }
        }

    Thanks!


  • Delphi: Minimize application to systray

    - by marco92w
    I want to minimize a Delphi application to the systray instead of the task bar. The necessary steps seem to be the following:

    1. Create the icon which should then be displayed in the systray.
    2. When the user clicks the [-] to minimize the application: hide the form, add the icon (step 1) to the systray, and hide/delete the application's entry in the task bar.
    3. When the user double-clicks the application's icon in the systray: show the form, un-minimize the application again and bring it to the front (if "WindowState" is "WS_Minimized", set it to "WS_Normal"), and hide/delete the application's icon in the systray.
    4. When the user terminates the application: hide/delete the application's icon in the systray.

    That's it. Right? How could one implement this in Delphi? I've found the following code, but I don't know why it works. It doesn't follow my steps described above...

        unit uMinimizeToTray;

        interface

        uses
          Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls,
          Forms, Dialogs, StdCtrls, ShellApi;

        const
          WM_NOTIFYICON = WM_USER + 333;

        type
          TMinimizeToTray = class(TForm)
            procedure FormCreate(Sender: TObject);
            procedure FormClose(Sender: TObject; var Action: TCloseAction);
            procedure CMClickIcon(var msg: TMessage); message WM_NOTIFYICON;
          private
            { Private-Deklarationen }
          public
            { Public-Deklarationen }
          end;

        var
          MinimizeToTray: TMinimizeToTray;

        implementation

        {$R *.dfm}

        procedure TMinimizeToTray.CMClickIcon(var msg: TMessage);
        begin
          if msg.lparam = WM_LBUTTONDBLCLK then
            Show;
        end;

        procedure TMinimizeToTray.FormCreate(Sender: TObject);
        var
          tnid: TNotifyIconData;
          HMainIcon: HICON;
        begin
          HMainIcon := LoadIcon(MainInstance, 'MAINICON');
          Shell_NotifyIcon(NIM_DELETE, @tnid);
          tnid.cbSize := sizeof(TNotifyIconData);
          tnid.Wnd := handle;
          tnid.uID := 123;
          tnid.uFlags := NIF_MESSAGE or NIF_ICON or NIF_TIP;
          tnid.uCallbackMessage := WM_NOTIFYICON;
          tnid.hIcon := HMainIcon;
          tnid.szTip := 'Tooltip';
          Shell_NotifyIcon(NIM_ADD, @tnid);
        end;

        procedure TMinimizeToTray.FormClose(Sender: TObject; var Action: TCloseAction);
        begin
          Action := caNone;
          Hide;
        end;

        end.


  • Populating ComboBoxDataColumn items and values

    - by MarceloRamires
    I have a "populate combobox", and I'm so happy with it that I've even started using more comboboxes. It takes the combobox object by reference with the ID of the "value set" (or whatever you want to call it) from a table and adds the items and their respective values (which differ) and does the job. I've recently had the brilliant idea of using comboboxes in a gridview, and I was happy to notice that it worked JUST LIKE a single combobox, but populating all the comboboxes in the given column at the same time. ObjComboBox.Items.Add("yadayada"); //works just like ObjComboBoxColumn.Items.Add("blablabla"); But When I started planning how to populate these comboboxes I've noticed: There's no "Values" property in ComboBoxDataColumn. ObjComboBox.Values = whateverArray; //works, but the following doesn't ObjComboBoxColumn.Values = whateverArray; Questions: 0 - How do I populate it's values ? (I suspect it's just as simple, but uses another name) 1 - If it works just like a combobox, what's the explanation for not having this attribute ? -----[EDIT]------ So I've checked out Charles' quote, and I've figured I had to change my way of populating these bad boys. Instead of looping through the strings and inserting them one by one in the combobox, I should grab the fields I want to populate in a table, and set one column of the table as the "value", and other one as the "display". So I've done this: ObjComboBoxColumn.DataSource = DTConfig; //Double checked, guaranteed to be populated ObjComboBoxColumn.ValueMember = "Code"; ObjComboBoxColumn.DisplayMember = "Description"; But nothing happens, if I use the same object as so: ObjComboBoxColumn.Items.Add("StackOverflow"); It is added. There is no DataBind() function. It finds the two columns, and that's guaranteed ("Code" and "Description") and if I change their names to nonexistant ones it gives me an exception, so that's a good sign. -----[EDIT]------ I have a table in SQL Server that is something like code  |  text —————    1    | foo    2    | bar It's simple, and with other comboboxes (outside of gridviews) i've successfully populated looping through the rows and adding the texts: ObjComboBox.Items.Add(MyDataTable.Rows[I]["MyColumnName"].ToString()); And getting every value, adding it into an array, and setting it like: ObjComboBox.Values = MyArray; I'd like to populate my comboboxColumns just as simply as I do with comboboxes.


  • Accurately display upload progress in Silverlight upload

    - by Matt
    I'm trying to debug a file upload / download issue I'm having. I've got a Silverlight file uploader, and to transmit the files I make use of the HttpWebRequest class. So I create a connection to my file upload handler on the server and begin transmitting. While a file uploads, I keep track of total bytes written to the RequestStream so I can figure out a percentage.

    Now, working at home I've got a rather slow connection, and I think Silverlight, or the browser, is lying to me. It seems that my upload progress logic is inaccurate. When I do multiple file uploads (24 images of 3-6 MB each in my testing), the logic reports that the files have finished uploading, but I believe it only reflects the progress of bytes written to the RequestStream, not the actual number of bytes uploaded. What is the most accurate way to measure upload progress? Here's the logic I'm using:

        public void Upload()
        {
            if( _TargetFile != null )
            {
                Status = FileUploadStatus.Uploading;
                Abort = false;
                long diff = _TargetFile.Length - BytesUploaded;
                UriBuilder ub = new UriBuilder( App.siteUrl + "upload.ashx" );
                bool complete = diff <= ChunkSize;
                ub.Query = string.Format( "{3}name={0}&StartByte={1}&Complete={2}", fileName, BytesUploaded, complete,
                    string.IsNullOrEmpty( ub.Query ) ? "" : ub.Query.Remove( 0, 1 ) + "&" );
                HttpWebRequest webrequest = ( HttpWebRequest ) WebRequest.Create( ub.Uri );
                webrequest.Method = "POST";
                webrequest.BeginGetRequestStream( WriteCallback, webrequest );
            }
        }

        private void WriteCallback( IAsyncResult asynchronousResult )
        {
            HttpWebRequest webrequest = ( HttpWebRequest ) asynchronousResult.AsyncState;
            // End the operation.
            Stream requestStream = webrequest.EndGetRequestStream( asynchronousResult );
            byte[] buffer = new Byte[ 4096 ];
            int bytesRead = 0;
            int tempTotal = 0;
            Stream fileStream = _TargetFile.OpenRead();
            fileStream.Position = BytesUploaded;
            while( ( bytesRead = fileStream.Read( buffer, 0, buffer.Length ) ) != 0
                && tempTotal + bytesRead < ChunkSize && !Abort )
            {
                requestStream.Write( buffer, 0, bytesRead );
                requestStream.Flush();
                BytesUploaded += bytesRead;
                tempTotal += bytesRead;
                int percent = ( int ) ( ( BytesUploaded / ( double ) _TargetFile.Length ) * 100 );
                UploadPercent = percent;
                if( UploadProgressChanged != null )
                {
                    UploadProgressChangedEventArgs args = new UploadProgressChangedEventArgs( percent, bytesRead, BytesUploaded, _TargetFile.Length, _TargetFile.Name );
                    SmartDispatcher.BeginInvoke( () => UploadProgressChanged( this, args ) );
                }
            }

            // only close the stream if it came from the file, don't close resizestream so we don't have to resize it over again.
            fileStream.Close();
            requestStream.Close();
            webrequest.BeginGetResponse( ReadCallback, webrequest );
        }


  • Moving from Linear Probing to Quadratic Probing (hash collisions)

    - by Nazgulled
    Hi, my current implementation of a hash table is using Linear Probing, and now I want to move to Quadratic Probing (and later to chaining and maybe double hashing too). I've read a few articles, tutorials, Wikipedia, etc... but I still don't know exactly what I should do.

    Linear Probing, basically, has a step of 1 and that's easy to do. When searching, inserting or removing an element from the hash table, I need to calculate a hash, and for that I do this:

        index = hash_function(key) % table_size;

    Then, while searching, inserting or removing, I loop through the table until I find a free bucket, like this:

        do {
            if(/* CHECK IF IT'S THE ELEMENT WE WANT */) {
                // FOUND ELEMENT
                return;
            } else {
                index = (index + 1) % table_size;
            }
        } while(/* LOOP UNTIL IT'S NECESSARY */);

    As for Quadratic Probing, I think what I need to do is change how the "index" step size is calculated, but that's the part I don't understand. I've seen various pieces of code, and all of them are somewhat different. Also, I've seen some implementations of Quadratic Probing where the hash function is changed to accommodate that (but not all of them). Is that change really needed, or can I avoid modifying the hash function and still use Quadratic Probing?

    EDIT: After reading everything pointed out by Eli Bendersky below, I think I got the general idea. Here's part of the code at http://eternallyconfuzzled.com/tuts/datastructures/jsw_tut_hashtable.aspx:

        for ( step = 1; table->table[h] != EMPTY; step++ ) {
            if ( compare ( key, table->table[h] ) == 0 )
                return 1;

            /* Move forward quadratically, wrap if necessary */
            h = ( h + ( step * step - step ) / 2 ) % table->size;
        }

    There are 2 things I don't get... They say that quadratic probing is usually done using c(i) = i^2. However, in the code above, it's doing something more like c(i) = (i^2 - i)/2. I was ready to implement this in my code, but I would simply do:

        index = (index + (index^index)) % table_size;

    ...and not:

        index = (index + (index^index - index)/2) % table_size;

    If anything, I would do:

        index = (index + (index^index)/2) % table_size;

    ...because I've seen other code examples dividing by two. Although I don't understand why...

    1) Why is it subtracting the step?
    2) Why is it dividing by 2?
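
    One way to see the two sub-questions, sketched below (not from the thread): subtracting the step and dividing by 2 turn step^2 into triangular-number increments, i.e. the probe moves forward by 1, then 2, then 3, and so on, so the offsets from the home bucket are i*(i+1)/2. On a power-of-two table size that sequence provably visits every bucket exactly once, while plain i^2 offsets revisit some buckets and never reach others:

        # Sketch: triangular-number probing vs. plain squares on a power-of-two table.
        table_size = 8   # must be a power of two for full coverage
        h0 = 3           # arbitrary home bucket

        probes, offset = [], 0
        for i in range(table_size):
            probes.append((h0 + offset) % table_size)
            offset += i + 1          # increments 1, 2, 3, ... => offsets i*(i+1)/2

        squares = [(h0 + i * i) % table_size for i in range(table_size)]

        print(probes)   # [3, 4, 6, 1, 5, 2, 0, 7] -- every bucket exactly once
        print(squares)  # [3, 4, 7, 4, 3, 4, 7, 4] -- collides and misses buckets

    One caution on the snippets in the question: in C, ^ is bitwise XOR, not exponentiation, so an implementation would need step * step (or the triangular form above), never index^index.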


  • NSObject release destroys local copy of object's data

    - by Spider-Paddy
    I know this is something stupid on my part, but I don't get what's happening. I create an object that fetches data & puts it into an array in a specific format. Since it fetches asynchronously (it has to download & parse data), I put a delegate method into the object that needs the data, so that the data-fetching object copies its formatted array into an array in the calling object. The problem is that when the data-fetching object is released, the copy it created in the caller is being erased.

    In the .h file:

        @property (nonatomic, retain) NSArray *imagesDataSource;

    In the .m file:

        // Fetch item details
        ImagesParser *imagesParserObject = [[ImagesParser alloc] init:self];
        [imagesParserObject getArticleImagesOfArticleId:(NSInteger)currentArticleId];
        [imagesParserObject release]; // <-- problematic release

        // Called by parser when images parsing is finished
        -(void)imagesDataTransferComplete:(ImagesParser *)imagesParserObject
        {
            self.imagesDataSource = [imagesParserObject.returnedArray copy]; // copy array to local variable

            // If there are more pics, they must be assembled in an array for possible UIImageView animation
            NSInteger picCount = [imagesDataSource count];
            if(picCount > 1) // 1 image is assumed to be the pic already displayed
            {
                // Build image array
                NSMutableArray *tempPicArray = [[NSMutableArray alloc] init]; // Temp space to hold images while building
                for(int i = 0; i < picCount; i++)
                {
                    // Get Nr from only article in detailDataSource & pic name (Small) from each item in imagesDataSource
                    NSString *picAddress = [NSString stringWithFormat:@"http://some.url.com/shopdata/image/article/%@/%@",
                        [[detailDataSource objectAtIndex:0] objectForKey:@"Nr"],
                        [[imagesDataSource objectAtIndex:i] objectForKey:@"Small"]];
                    NSURL *picURL = [NSURL URLWithString:picAddress];
                    NSData *picData = [NSData dataWithContentsOfURL:picURL];
                    [tempPicArray addObject:[UIImage imageWithData:picData]];
                }
                imagesArray = [tempPicArray copy]; // copy makes immutable copy of array
                [tempPicArray release];
                currentPicIndex = 0; // Assume first pic is pic already being shown
            }
            else
                imagesArray = nil; // No need for a needless pic array

            // Remove please wait message
            [pleaseWaitViewControllerObject.view removeFromSuperview];
        }

    I put in tons of NSLog lines to keep track of what was going on, and self.imagesDataSource is populated with the returned array, but when the parser object is released, self.imagesDataSource becomes empty. I thought self.imagesDataSource = [imagesParserObject.returnedArray copy]; was supposed to make an independent object, as if it was alloc, init'ed, so that self.imagesDataSource is not just a pointer to the parser's array but its own array. So why does the release of the parser object clear the copy of the array? (I checked & double checked that it's not something overwriting self.imagesDataSource; commenting out [imagesParserObject release] consistently fixes the problem.)

    Also, I have exactly the same problem with self.detailDataSource, which is declared & populated in exactly the same way as self.imagesDataSource. I thought that once I call the parser I could release it, because the caller no longer needs to refer to it; all further activity is carried out by the parser object through its delegate method. What am I doing wrong?


  • Spring Security 3.1 xsd and jars mismatch issue

    - by kmansoor
    I'm trying to migrate from Spring Framework 3.0.5 to 3.1 and Spring Security 3.0.5 to 3.1 (not to mention Hibernate 3.6 to 4.1), using Apache Ivy. I'm getting the following error trying to start Tomcat 7.23 within Eclipse Helios (among a host of others, however this is the last in the console):

        org.springframework.beans.factory.BeanDefinitionStoreException: Line 7 in XML document from
        ServletContext resource [/WEB-INF/focus-security.xml] is invalid; nested exception is
        org.xml.sax.SAXParseException: Document root element "beans:beans", must match DOCTYPE root "null".
        org.xml.sax.SAXParseException: Document root element "beans:beans", must match DOCTYPE root "null".

    My security config file looks like this:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans:beans xmlns="http://www.springframework.org/schema/security"
            xmlns:beans="http://www.springframework.org/schema/beans"
            xmlns:jdbc="http://www.springframework.org/schema/jdbc"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.springframework.org/schema/beans
                http://www.springframework.org/schema/beans/spring-beans-3.1.xsd
                http://www.springframework.org/schema/security
                http://www.springframework.org/schema/security/spring-security-3.1.xsd
                http://www.springframework.org/schema/jdbc
                http://www.springframework.org/schema/jdbc/spring-jdbc-3.1.xsd">

    ivy.xml looks like this:

        <dependencies>
            <dependency org="org.hibernate" name="hibernate-core" rev="4.1.7.Final"/>
            <dependency org="org.hibernate" name="com.springsource.org.hibernate.validator" rev="4.2.0.Final" />
            <dependency org="org.hibernate.javax.persistence" name="hibernate-jpa-2.0-api" rev="1.0.1.Final"/>
            <dependency org="org.hibernate" name="hibernate-entitymanager" rev="4.1.7.Final"/>
            <dependency org="org.hibernate" name="hibernate-validator" rev="4.3.0.Final"/>
            <dependency org="org.springframework" name="spring-context" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework" name="spring-web" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework" name="spring-tx" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework" name="spring-webmvc" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework" name="spring-test" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework.security" name="spring-security-core" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework.security" name="spring-security-web" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework.security" name="spring-security-config" rev="3.1.2.RELEASE"/>
            <dependency org="org.springframework.security" name="spring-security-taglibs" rev="3.1.2.RELEASE"/>
            <dependency org="net.sf.dozer" name="dozer" rev="5.3.2"/>
            <dependency org="org.apache.poi" name="poi" rev="3.8"/>
            <dependency org="commons-io" name="commons-io" rev="2.4"/>
            <dependency org="org.slf4j" name="slf4j-api" rev="1.6.6"/>
            <dependency org="org.slf4j" name="slf4j-log4j12" rev="1.6.6"/>
            <dependency org="org.slf4j" name="slf4j-ext" rev="1.6.6"/>
            <dependency org="log4j" name="log4j" rev="1.2.17"/>
            <dependency org="org.testng" name="testng" rev="6.8"/>
            <dependency org="org.dbunit" name="dbunit" rev="2.4.8"/>
            <dependency org="org.easymock" name="easymock" rev="3.1"/>
        </dependencies>

    I understand (hope) this error is due to a mismatch between the declared xsd and the jars on the classpath. Any pointers will be greatly appreciated.


  • Is it worth moving from stored procedures to LINQ?

    - by Josef
    I'm looking at standardizing programming in an organisation. Half uses stored procedures and the other half LINQ. From what I've read, there is still some debate going on on this topic. My concern is that MS is trying to slip in its own proprietary query language, LINQ, to make SQL redundant. If, a few years back, Microsoft had tried to win customers from Oracle and Sybase with their MSSQL database and stated that it didn't use SQL but their own proprietary query language, i.e. LINQ, I doubt many would have switched. I believe that is exactly what is happening now, by introducing it into the application business layer.

    I have used MS for many years, but there is one gripe that I have with them, and that is that they change their direction a lot. By a lot I mean new releases of .NET, Silverlight etc. are more than 30% different from the previous version. So by the time you become productive, a new release is on the way. As things stand now, a web developer using .NET would need to know either VB.NET or C#, plus XML, XAML, JavaScript, HTML, SQL and now LINQ. That doesn't make for good productivity in my books. My concern is that once we all start using LINQ, MS will start changing it between releases, and it will become an ever-changing landscape. I believe that LINQ to SQL has already been deprecated. At least with SQL we are dealing with a more stable and standardized language.

    Are we looking at a programming revolution or a marketing campaign? As far as I know, other languages like COBOL have stayed the same for years. A COBOL programmer from 20 years ago could pick up today's code and start working on it. Could a VB3 person work on a modern .NET web app? Would these large changes need to be made if the underlying original foundation had been sound? I worry about following MS's shaky roadmap with its dead ends and double-backs. Are there any architects out there who feel the same?

    regards, Josef


  • How do I rotate a CALayer around a diagonal axis?

    - by Mattias Wadman
    Hi, I'm trying to implement a flip animation to be used in a board-game-like application. The animation is supposed to look like a game piece that rotates and changes to the color of its back. I have managed to get an animation that flips around an orthogonal axis, but when I try to flip around a diagonal axis by changing the rotation around the z-axis, not surprisingly the actual image also gets rotated. Instead, I would like to rotate the image as it is around a diagonal axis.

    I have tried to change layer.sublayerTransform, but with no success. Here is the current implementation. It works by doing a trick to resolve the issue of getting a mirrored image at the end of the animation. The solution is to not actually rotate the layer 180 degrees; instead it rotates it 90 degrees, changes the image, and then rotates it back.

        + (void)flipLayer:(CALayer *)layer
                  toImage:(CGImageRef)image
                withAngle:(double)angle
        {
            const float duration = 0.5f;

            CAKeyframeAnimation *diag = [CAKeyframeAnimation animationWithKeyPath:@"transform.rotation.z"];
            diag.duration = duration;
            diag.values = [NSArray arrayWithObjects:
                           [NSNumber numberWithDouble:angle],
                           [NSNumber numberWithDouble:0.0f],
                           nil];
            diag.keyTimes = [NSArray arrayWithObjects:
                             [NSNumber numberWithDouble:0.0f],
                             [NSNumber numberWithDouble:1.0f],
                             nil];
            diag.calculationMode = kCAAnimationDiscrete;

            CAKeyframeAnimation *flip = [CAKeyframeAnimation animationWithKeyPath:@"transform.rotation.y"];
            flip.duration = duration;
            flip.values = [NSArray arrayWithObjects:
                           [NSNumber numberWithDouble:0.0f],
                           [NSNumber numberWithDouble:M_PI / 2],
                           [NSNumber numberWithDouble:0.0f],
                           nil];
            flip.keyTimes = [NSArray arrayWithObjects:
                             [NSNumber numberWithDouble:0.0f],
                             [NSNumber numberWithDouble:0.5f],
                             [NSNumber numberWithDouble:1.0f],
                             nil];
            flip.calculationMode = kCAAnimationLinear;

            CAKeyframeAnimation *replace = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
            replace.duration = duration / 2;
            replace.beginTime = duration / 2;
            replace.values = [NSArray arrayWithObjects:(id)image, nil];
            replace.keyTimes = [NSArray arrayWithObjects:
                                [NSNumber numberWithDouble:0.0f],
                                nil];
            replace.calculationMode = kCAAnimationDiscrete;

            CAAnimationGroup *group = [CAAnimationGroup animation];
            group.removedOnCompletion = NO;
            group.duration = duration;
            group.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionLinear];
            group.animations = [NSArray arrayWithObjects:diag, flip, replace, nil];
            group.fillMode = kCAFillModeForwards;
            [layer addAnimation:group forKey:nil];
        }


  • Exporting to XML, including embedded classes

    - by Andy
    I have an object, config, which has some properties. I can export this OK; however, it also has an ArrayList relating to embedded classes, which I can't get to appear when I export to XML. Any pointers would be helpful.

    Export method:

        public String exportXML(config conf, String path) {
            String success = "";
            try {
                FileOutputStream fstream = new FileOutputStream(path);
                try {
                    XMLEncoder ostream = new XMLEncoder(fstream);
                    try {
                        ostream.writeObject(conf);
                        ostream.flush();
                    } finally {
                        ostream.close();
                    }
                } finally {
                    fstream.close();
                }
            } catch (Exception ex) {
                success = ex.getLocalizedMessage();
            }
            return success;
        }

    Config class (some detail stripped to keep size down):

        public class config {
            protected String author = "";
            protected String website = "";
            private ArrayList questions = new ArrayList();

            public config(){
            }

            public void addQuestion(String name) {
                questions.add(new question(questions.size(), name));
            }

            public void removeQuestion(int id) {
                questions.remove(id);
                for (int c = 0; c <= questions.size(); c++) {
                    question q = (question) (questions.get(id));
                    q.setId(c);
                }
                questions.trimToSize();
            }

            public config.question getQuestion(int id){
                return (question)questions.get(id);
            }

            /**
             * There can be multiple questions per config.
             * Questions store all the information regarding what questions are
             * asked of the user, including images, descriptions, and answers.
             */
            public class question {
                protected int id;
                protected String title;
                protected ArrayList answers;

                public question(int id, String title) {
                    this.id = id;
                    this.title = title;
                }

                public int getId() {
                    return id;
                }

                public void setId(int id) {
                    this.id = id;
                }

                public void addAnswer(String name) {
                    answers.add(new answer(answers.size(), name));
                }

                public void removeAnswer(int id) {
                    answers.remove(id);
                    for (int c = 0; c <= answers.size(); c++) {
                        answer a = (answer) (answers.get(id));
                        a.setId(c);
                    }
                    answers.trimToSize();
                }

                public config.question.answer getAnswer(int id){
                    return (answer)answers.get(id);
                }

                public class answer {
                    protected int id;
                    protected String title;

                    public answer(int id, String title) {
                        this.id = id;
                        this.title = title;
                    }

                    public int getId() {
                        return id;
                    }

                    public void setId(int id) {
                        this.id = id;
                    }
                }
            }
        }

    Resultant XML file:

        <?xml version="1.0" encoding="UTF-8"?>
        <java version="1.6.0_18" class="java.beans.XMLDecoder">
         <object class="libConfig.config">
          <void property="appName">
           <string>xxx</string>
          </void>
          <void property="author">
           <string>Andy</string>
          </void>
          <void property="website">
           <string>www.example.com/dsx.xml</string>
          </void>
         </object>
        </java>


  • Maven best practice for generating multiple jars with different/filtered classes?

    - by jaguard
    I developed a Java utility library (similar to Apache Commons) that I use in various projects. In addition to fat clients, I also use it for mobile clients (PDA with J9 Foundation profile). In time, the library that started as a single project spread over multiple packages. As a result I end up with a lot of functionality that is not really needed in all the projects. Since this library is also used inside some mobile/PDA projects, I need a way to collect just the used classes and generate the actual specialized jars.

    Currently, in the projects that are using this library, I have Ant jar tasks that generate (from the utility project) the specialized jar files (ex: my-util-1.0-pda.jar, my-util-1.0-rcp.jar) using include/exclude jar task features. This is mostly needed due to the generated jar size constraints for the mobile projects.

    Migrating now to Maven, I just wonder if there are any best practices to arrive at something similar. I consider the following scenarios:

    [1] - In addition to the main jar artifact (my-lib-1.0.jar), also generate inside the my-lib project the separate/specialized artifacts using classifiers (ex: my-lib-1.0-pda.jar) using the Maven Jar Plugin or Maven Assembly Plugin filtering/includes... I'm not very comfortable with this approach since it pollutes the library with library consumers' demands (filters).

    [2] - Create additional Maven projects for all the specialized clients/projects, that will "wrap" my-lib and generate the filtered jar artifacts (ex: my-lib-wrapper-pda-1.0 ...etc). As a result, these wrapper projects will include the filtering (to generate the filtered artifact) and will depend just on the my-lib project, and the client projects will depend on my-lib-wrapper-xxx-1.0 instead of my-lib-1.0. This approach may look problematic since, even though it will leave the my-lib project intact (with no additional classifiers & artifacts), it will basically double the number of projects, since for every client project I'll have one just to collect the needed classes from the my-util library (a my-pda-app project will need a my-lib-wrapper-for-my-pda-app project/dependency).

    [3] - Inside every client project that uses the library (ex: my-pda-app), add some specialized Maven plugins to trim out (when generating the final artifact/package) the unneeded classes (ex: maven-assembly-plugin, maven-jar-plugin, proguard-maven-plugin).

    What is the best practice for solving this kind of problem in the "Maven way"?!


  • Python/numpy tricky slicing problem

    - by daver
    Hi stack overflow, I have a problem with some numpy stuff. I need a numpy array to behave in an unusual manner by returning a slice as a view of the data I have sliced, not a copy. So here's an example of what I want to do: Say we have a simple array like this:

        a = array([1, 0, 0, 0])

    I would like to update consecutive entries in the array (moving left to right) with the previous entry from the array, using syntax like this:

        a[1:] = a[0:3]

    This would get the following result:

        a = array([1, 1, 1, 1])

    Or something like this:

        a[1:] = 2*a[:3]
        # a = [1,2,4,8]

    To illustrate further, I want the following kind of behaviour:

        for i in range(len(a)):
            if i == 0 or i+1 == len(a):
                continue
            a[i+1] = a[i]

    Except I want the speed of numpy. The default behavior of numpy is to take a copy of the slice, so what I actually get is this:

        a = array([1, 1, 0, 0])

    I already have this array as a subclass of the ndarray, so I can make further changes to it if need be; I just need the slice on the right hand side to be continually updated as it updates the slice on the left hand side. Am I dreaming, or is this magic possible?

    Update: This is all because I am trying to use Gauss-Seidel iteration to solve a linear algebra problem, more or less. It is a special case involving harmonic functions; I was trying to avoid going into this because it's really not necessary and likely to confuse things further, but here goes. The algorithm is this:

        while not converged:
            for i in range(len(u[:,0])):
                for j in range(len(u[0,:])):
                    # skip over boundary entries, i,j == 0 or len(u)
                    u[i,j] = 0.25*(u[i-1,j] + u[i+1,j] + u[i, j-1] + u[i,j+1])

    Right? But you can do this two ways. Jacobi involves updating each element with its neighbours without considering updates you have already made until the while loop cycles; to do it in loops you would copy the array, then update one array from the copied array. However, Gauss-Seidel uses information you have already updated for each of the i-1 and j-1 entries, thus no need for a copy; the loop essentially 'knows', since the array has been re-evaluated after each single element update. That is to say, every time we call up an entry like u[i-1,j] or u[i,j-1], the information calculated in the previous loop will be there.

    I want to replace this slow and ugly nested loop situation with one nice clean line of code using numpy slicing:

        u[1:-1,1:-1] = 0.25*(u[:-2,1:-1] + u[2:,1:-1] + u[1:-1,:-2] + u[1:-1,2:])

    But the result is Jacobi iteration, because when you take a slice u[:-2,1:-1] you copy the data, thus the slice is not aware of any updates made. Now numpy still loops, right? It's not parallel, it's just a faster way to loop that looks like a parallel operation in python. I want to exploit this behaviour by sort of hacking numpy to return a pointer instead of a copy when I take a slice. Then every time numpy loops, that slice will 'update', or really just replicate whatever happened in the update. To do this I need slices on both sides of the array to be pointers. Anyway, if there is some really really clever person out there, that's awesome, but I've pretty much resigned myself to believing the only answer is to loop in C.
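
    A sketch, not from the post, of a standard workaround: red-black (checkerboard) Gauss-Seidel. Cells of one color only ever read neighbours of the other color, so each half-sweep can be fully vectorized with ordinary copying numpy operations while still consuming the freshly updated values from the previous half-sweep, keeping Gauss-Seidel-style convergence. The grid size and boundary condition below are illustrative assumptions.

        # Sketch: red-black Gauss-Seidel for the 5-point Laplace stencil.
        import numpy as np

        def red_black_sweep(u):
            # One Gauss-Seidel sweep over the interior, as two vectorized half-sweeps.
            i, j = np.indices(u.shape)
            interior = (i > 0) & (i < u.shape[0] - 1) & (j > 0) & (j < u.shape[1] - 1)
            for parity in (0, 1):
                mask = interior & ((i + j) % 2 == parity)
                # np.roll(u, 1, 0)[i, j] is u[i-1, j] on the interior, etc.
                u[mask] = 0.25 * (np.roll(u, 1, 0)[mask] + np.roll(u, -1, 0)[mask] +
                                  np.roll(u, 1, 1)[mask] + np.roll(u, -1, 1)[mask])
            return u

        u = np.zeros((64, 64))
        u[0, :] = 1.0                  # example boundary condition
        for _ in range(500):           # iterate toward convergence
            red_black_sweep(u)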


  • Creating a System::String object from a BSTR in Managed C++ - is this way a good idea?

    - by Eli
    My co-worker is filling a System::String object with double-byte characters from an unmanaged library by the following method:

        RFC_PARAMETER aux;
        Object* target;
        RFC_UNICODE_TYPE_ELEMENT* elm;

        elm = &(m_coreObject->m_pStructMeta->m_typeElements[index]);
        aux.name = NULL;
        aux.nlen = 0;
        aux.type = elm->type;
        aux.leng = elm->c2_length;
        aux.addr = m_coreObject->m_rfcWa + elm->c2_offset;
        GlobalFunctions::CreateObjectForRFCField(target, aux, elm->decimals);
        GlobalFunctions::ReadRFCField(target, aux, elm->decimals);

    Where GlobalFunctions::CreateObjectForRFCField creates a System::String object filled with spaces (for padding) to what the unmanaged library states the max length should be:

        static void CreateObjectForRFCField(Object*& object, RFC_PARAMETER& par, unsigned dec)
        {
            switch (par.type)
            {
            case TYPC:
                object = new String(' ', par.leng / sizeof(_TCHAR));
                break;
            // unimportant afterwards.
            }
        }

    And GlobalFunctions::ReadRFCField() copies the data from the library into the created String object and preserves the space padding:

        static void ReadRFCField(String* target, RFC_PARAMETER& par)
        {
            int lngt;
            _TCHAR* srce;

            switch (par.type)
            {
            case TYPC:
            case TYPDATE:
            case TYPTIME:
            case TYPNUM:
                lngt = par.leng / sizeof(_TCHAR);
                srce = (_TCHAR*)par.addr;
                break;
            case RFCTYPE_STRING:
                lngt = (*(_TCHAR**)par.addr != NULL) ? (int)_tcslen(*(_TCHAR**)par.addr) : 0;
                srce = *(_TCHAR**)par.addr;
                break;
            default:
                throw new DotNet_Incomp_RFCType2;
            }

            if (lngt > target->Length)
                lngt = target->Length;

            GCHandle gh = GCHandle::Alloc(target, GCHandleType::Pinned);
            wchar_t* buff = reinterpret_cast<wchar_t*>(gh.AddrOfPinnedObject().ToPointer());
            _wcsnset(buff, ' ', target->Length);
            _snwprintf(buff, lngt, _T2WFSP, srce);
            gh.Free();
        }

    Now, on occasion, we see access violations getting thrown in the _snwprintf call. My question really is: is it appropriate to create a string padded to a length (ideally to pre-allocate the internal buffer), and then to modify the String using GCHandle::Alloc and the mess above? And yes, I know that System::String objects are supposed to be immutable; I'm looking for a definitive "This is WRONG and here is why". Thanks, Eli.


  • MySQL "ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150)"

    - by Ankur Banerjee
    Hi, I was working on creating some tables in database foo, but every time I end up with errno 150 regarding the foreign key. Firstly, here's my code for creating tables: CREATE TABLE Clients ( client_id CHAR(10) NOT NULL , client_name CHAR(50) NOT NULL , provisional_license_num CHAR(50) NOT NULL , client_address CHAR(50) NULL , client_city CHAR(50) NULL , client_county CHAR(50) NULL , client_zip CHAR(10) NULL , client_phone INT NULL , client_email CHAR(255) NULL , client_dob DATETIME NULL , test_attempts INT NULL ); CREATE TABLE Applications ( application_id CHAR(10) NOT NULL , office_id INT NOT NULL , client_id CHAR(10) NOT NULL , instructor_id CHAR(10) NOT NULL , car_id CHAR(10) NOT NULL , application_date DATETIME NULL ); CREATE TABLE Instructors ( instructor_id CHAR(10) NOT NULL , office_id INT NOT NULL , instructor_name CHAR(50) NOT NULL , instructor_address CHAR(50) NULL , instructor_city CHAR(50) NULL , instructor_county CHAR(50) NULL , instructor_zip CHAR(10) NULL , instructor_phone INT NULL , instructor_email CHAR(255) NULL , instructor_dob DATETIME NULL , lessons_given INT NULL ); CREATE TABLE Cars ( car_id CHAR(10) NOT NULL , office_id INT NOT NULL , engine_serial_num CHAR(10) NULL , registration_num CHAR(10) NULL , car_make CHAR(50) NULL , car_model CHAR(50) NULL ); CREATE TABLE Offices ( office_id INT NOT NULL , office_address CHAR(50) NULL , office_city CHAR(50) NULL , office_County CHAR(50) NULL , office_zip CHAR(10) NULL , office_phone INT NULL , office_email CHAR(255) NULL ); CREATE TABLE Lessons ( lesson_num INT NOT NULL , client_id CHAR(10) NOT NULL , date DATETIME NOT NULL , time DATETIME NOT NULL , milegage_used DECIMAL(5, 2) NULL , progress CHAR(50) NULL ); CREATE TABLE DrivingTests ( test_num INT NOT NULL , client_id CHAR(10) NOT NULL , test_date DATETIME NOT NULL , seat_num INT NOT NULL , score INT NULL , test_notes CHAR(255) NULL ); ALTER TABLE Clients ADD PRIMARY KEY (client_id); ALTER TABLE Applications ADD PRIMARY KEY (application_id); ALTER TABLE Instructors ADD PRIMARY KEY (instructor_id); ALTER TABLE Offices ADD PRIMARY KEY (office_id); ALTER TABLE Lessons ADD PRIMARY KEY (lesson_num); ALTER TABLE DrivingTests ADD PRIMARY KEY (test_num); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Instructors FOREIGN KEY (instructor_id) REFERENCES Instructors (instructor_id); ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ALTER TABLE Lessons ADD CONSTRAINT FK_Lessons_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); ALTER TABLE Cars ADD CONSTRAINT FK_Cars_Offices FOREIGN KEY (office_id) REFERENCES Offices (office_id); ALTER TABLE Clients ADD CONSTRAINT FK_DrivingTests_Clients FOREIGN KEY (client_id) REFERENCES Clients (client_id); These are the errors that I get: mysql> ALTER TABLE Applications ADD CONSTRAINT FK_Applications_Cars FOREIGN KEY (car_id) REFERENCES Cars (car_id); ERROR 1005 (HY000): Can't create table 'foo.#sql-12c_4' (errno: 150) I ran SHOW ENGINE INNODB STATUS which gives a more detailed error description: ------------------------ LATEST FOREIGN KEY ERROR ------------------------ 100509 20:59:49 Error in foreign key constraint of table practice9/#sql-12c_4: FOREIGN KEY (car_id) REFERENCES Cars (car_id): Cannot find an index in the 
referenced table where the referenced columns appear as the first columns, or column types in the table and the referenced table do not match for constraint. Note that the internal storage type of ENUM and SET changed in tables created with >= InnoDB-4.1.12, and such columns in old tables cannot be referenced by such columns in new tables. See http://dev.mysql.com/doc/refman/5.1/en/innodb-foreign-key-constraints.html for correct foreign key definition. ------------ I searched around on StackOverflow and elsewhere online - came across a helpful blog post here with pointers on how to resolve this error - but I can't figure out what's going wrong. Any help would be appreciated!
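    One detail stands out when checked against the DDL above (an observation from the posted script, not a confirmed diagnosis): the ALTER TABLE block adds primary keys to every table except Cars, so Cars.car_id has no index for the constraint to reference - which is precisely what the InnoDB message complains about. A sketch of the likely fix:

        -- Index the referenced column first (here, as the primary key),
        -- then the foreign key can be created:
        ALTER TABLE Cars ADD PRIMARY KEY (car_id);

        ALTER TABLE Applications
            ADD CONSTRAINT FK_Applications_Cars
            FOREIGN KEY (car_id) REFERENCES Cars (car_id);

    The last constraint in the script also looks suspect: FK_DrivingTests_Clients is added to Clients, where it was presumably meant for DrivingTests.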


  • Compile error C++: could not deduce template argument for 'T'

    - by OneShot
    I'm trying to read binary data to load structs back into memory so I can edit them and save them back to the .dat file. readVector() attempts to read the file and return the vectors that were serialized, but I'm getting this compile error when I try to build it. What am I doing wrong with my templates? ***** EDIT ************** Code: // Project 5.cpp : main project file. #include "stdafx.h" #include <iostream> #include <fstream> #include <string> #include <vector> #include <algorithm> using namespace System; using namespace std; #pragma hdrstop int checkCommand (string line); template<typename T> void writeVector(ofstream &out, const vector<T> &vec); template<typename T> vector<T> readVector(ifstream &in); struct InventoryItem { string Item; string Description; int Quantity; int wholesaleCost; int retailCost; int dateAdded; } ; int main(void) { cout << "Welcome to the Inventory Manager extreme! [Version 1.0]" << endl; ifstream in("data.dat"); vector<InventoryItem> structList; readVector<InventoryItem>( in ); while (1) { string line = ""; cout << endl; cout << "Commands: " << endl; cout << "1: Add a new record " << endl; cout << "2: Display a record " << endl; cout << "3: Edit a current record " << endl; cout << "4: Exit the program " << endl; cout << endl; cout << "Enter a command 1-4: "; getline(cin , line); int rValue = checkCommand(line); if (rValue == 1) { cout << "You've entered an invalid command! Try Again." << endl; } else if (rValue == 2){ cout << "Error calling command!" << endl; } else if (!rValue) { break; } } system("pause"); return 0; } int checkCommand (string line) { int intReturn = atoi(line.c_str()); int status = 3; switch (intReturn) { case 1: break; case 2: break; case 3: break; case 4: status = 0; break; default: status = 1; break; } return status; } template<typename T> void writeVector(ofstream &out, const vector<T> &vec) { out << vec.size(); for(vector<T>::const_iterator i = vec.begin(); i != vec.end(); i++) { out << *i; } } ostream& operator<<(std::ostream &strm, const InventoryItem &i) { return strm << i.Item << " (" << i.Description << ")"; } template<typename T> vector<T> readVector(ifstream &in) { size_t size; in >> size; vector<T> vec; vec.reserve(size); for(int i = 0; i < size; i++) { T tmp; in >> tmp; vec.push_back(tmp); } return vec; } Compiler errors: 1>------ Build started: Project: Project 5, Configuration: Debug Win32 ------ 1>Compiling... 
1>Project 5.cpp 1>.\Project 5.cpp(124) : warning C4018: '<' : signed/unsigned mismatch 1> .\Project 5.cpp(40) : see reference to function template instantiation 'std::vector<_Ty> readVector<InventoryItem>(std::ifstream &)' being compiled 1> with 1> [ 1> _Ty=InventoryItem 1> ] 1>.\Project 5.cpp(127) : error C2679: binary '>>' : no operator found which takes a right-hand operand of type 'InventoryItem' (or there is no acceptable conversion) 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(1144): could be 'std::basic_istream<_Elem,_Traits> &std::operator >><std::char_traits<char>>(std::basic_istream<_Elem,_Traits> &,signed char *)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(1146): or 'std::basic_istream<_Elem,_Traits> &std::operator >><std::char_traits<char>>(std::basic_istream<_Elem,_Traits> &,signed char &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(1148): or 'std::basic_istream<_Elem,_Traits> &std::operator >><std::char_traits<char>>(std::basic_istream<_Elem,_Traits> &,unsigned char *)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(1150): or 'std::basic_istream<_Elem,_Traits> &std::operator >><std::char_traits<char>>(std::basic_istream<_Elem,_Traits> &,unsigned char &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(155): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::basic_istream<_Elem,_Traits> &(__cdecl *)(std::basic_istream<_Elem,_Traits> &))' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(161): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::basic_ios<_Elem,_Traits> &(__cdecl *)(std::basic_ios<_Elem,_Traits> &))' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(168): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::ios_base &(__cdecl *)(std::ios_base &))' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(175): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::_Bool &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(194): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(short &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(228): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(unsigned short &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(247): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(int &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(273): or 
'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(unsigned int &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(291): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(long &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(309): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(__w64 unsigned long &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(329): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(__int64 &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(348): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(unsigned __int64 &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(367): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(float &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(386): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(double &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(404): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(long double &)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(422): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(void *&)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> C:\Program Files (x86)\Microsoft Visual Studio 9.0\VC\include\istream(441): or 'std::basic_istream<_Elem,_Traits> &std::basic_istream<_Elem,_Traits>::operator >>(std::basic_streambuf<_Elem,_Traits> *)' 1> with 1> [ 1> _Elem=char, 1> _Traits=std::char_traits<char> 1> ] 1> while trying to match the argument list '(std::ifstream, InventoryItem)' 1>Build log was saved at "file://c:\Users\Owner\Documents\Visual Studio 2008\Projects\Project 5\Project 5\Debug\BuildLog.htm" 1>Project 5 - 1 error(s), 1 warning(s) ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ========== Oh my god...I fixed that error I think and now I got another one. Will you PLEASE just help me on this one too! What the heck does this mean ??
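    The C2679 error at the bottom is self-describing: readVector does in >> tmp on an InventoryItem, and no operator>> exists for that type (only operator<< is defined above). A sketch of the missing extraction operator, with the field order an assumption mirroring the struct definition:

        #include <istream>

        // Sketch: stream extraction for InventoryItem so that
        // readVector<InventoryItem> can compile. Note that operator>>
        // reads whitespace-delimited tokens, so Item or Description
        // values containing spaces would need std::getline and a
        // record delimiter instead.
        std::istream& operator>>(std::istream& strm, InventoryItem& i)
        {
            return strm >> i.Item >> i.Description >> i.Quantity
                        >> i.wholesaleCost >> i.retailCost >> i.dateAdded;
        }

    The C4018 warning is separate: the loop counter in readVector is an int compared against a size_t; declaring it as size_t silences it.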


  • Determining what frequencies correspond to the x axis in aurioTouch sample application

    - by eagle
    I'm looking at the aurioTouch sample application for the iPhone SDK. It has a basic spectrum analyzer implemented when you choose the "FFT" option. One of the things the app is lacking is X axis labels (i.e. the frequency labels). In the aurioTouchAppDelegate.mm file, in the function - (void)drawOscilloscope at line 652, it has the following code: if (displayMode == aurioTouchDisplayModeOscilloscopeFFT) { if (fftBufferManager->HasNewAudioData()) { if (fftBufferManager->ComputeFFT(l_fftData)) [self setFFTData:l_fftData length:fftBufferManager->GetNumberFrames() / 2]; else hasNewFFTData = NO; } if (hasNewFFTData) { int y, maxY; maxY = drawBufferLen; for (y=0; y<maxY; y++) { CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1); CGFloat fftIdx = yFract * ((CGFloat)fftLength); double fftIdx_i, fftIdx_f; fftIdx_f = modf(fftIdx, &fftIdx_i); SInt8 fft_l, fft_r; CGFloat fft_l_fl, fft_r_fl; CGFloat interpVal; fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24; fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24; fft_l_fl = (CGFloat)(fft_l + 80) / 64.; fft_r_fl = (CGFloat)(fft_r + 80) / 64.; interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f; interpVal = CLAMP(0., interpVal, 1.); drawBuffers[0][y] = (interpVal * 120); } cycleOscilloscopeLines(); } } From my understanding, this part of the code is what is used to decide which magnitude to draw for each frequency in the UI. My question is how can I determine what frequency each iteration (or y value) represents inside the for loop. For example, if I want to know what the magnitude is for 6kHz, I'm thinking of adding a line similar to the following: if (yValueRepresentskHz(y, 6)) NSLog(@"The magnitude for 6kHz is %f", (interpVal * 120)); Please note that although they chose to use the variable name y, from what I understand, it actually represents the x-axis in the visual graph of the spectrum analyzer, and the value of the drawBuffers[0][y] represents the y-axis.
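    A hedged sketch of the mapping (assuming the usual FFT relationship, and that the fftLength bins span 0 to half the sample rate; the hard-coded rate below is an assumption - the real value should be read from the audio session):

        // For an N-point FFT at sample rate Fs, bin k sits at k*Fs/N Hz.
        // Here fftLength bins cover 0 .. Fs/2, and fftIdx is the
        // (fractional) bin index for column y, so:
        double sampleRate = 44100.0;  // assumption: query the hardware rate
        double hzPerBin   = (sampleRate / 2.0) / (double)fftLength;
        double freqAtY    = fftIdx * hzPerBin;

        // Log the magnitude of the column nearest 6 kHz:
        if (fabs(freqAtY - 6000.0) < hzPerBin / 2.0)
            NSLog(@"The magnitude near 6kHz is %f", interpVal * 120);

    Placed inside the for loop after interpVal is computed, this turns the question's hypothetical yValueRepresentskHz() check into an explicit frequency comparison.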


  • Tree Node Checked behavior on a TreeView in Compact Framework 3.5 running on Windows Mobile 6.5

    - by Hydroslide
    I have been upgrading an existing .NET Windows Mobile application to use the 3.5 version of the compact framework and to run on Windows Mobile 6.5. I have a form with a TreeView. The TreeView.Checkboxes property is set to true so that each node has a check box. This gives no trouble in all previous versions of Windows Mobile. However, in version 6.5 when you click on a check box it appears to check and then uncheck instantaneously. But it only raises the AfterCheck event once. The only way I can get a check to stick is by double clicking it (which is the wrong behavior). Has anyone seen this behavior? Does anyone know of a workaround for it? I have included a simple test form. Dump this form into a Visual Studio 2008 Smart Device application targeted at Windows Mobile 6 to see what I mean. Public Class frmTree Inherits System.Windows.Forms.Form #Region " Windows Form Designer generated code " Public Sub New() MyBase.new() ' This call is required by the Windows Form Designer. InitializeComponent() ' Add any initialization after the InitializeComponent() call. End Sub 'Form overrides dispose to clean up the component list. <System.Diagnostics.DebuggerNonUserCode()> _ Protected Overrides Sub Dispose(ByVal disposing As Boolean) If disposing AndAlso components IsNot Nothing Then components.Dispose() End If MyBase.Dispose(disposing) End Sub 'Required by the Windows Form Designer Private components As System.ComponentModel.IContainer Friend WithEvents TreeView1 As System.Windows.Forms.TreeView Private mainMenu1 As System.Windows.Forms.MainMenu 'NOTE: The following procedure is required by the Windows Form Designer 'It can be modified using the Windows Form Designer. 'Do not modify it using the code editor. <System.Diagnostics.DebuggerStepThrough()> _ Private Sub InitializeComponent() Dim TreeNode1 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node0") Dim TreeNode2 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node2") Dim TreeNode3 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node3") Dim TreeNode4 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node4") Dim TreeNode5 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node1") Dim TreeNode6 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node5") Dim TreeNode7 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node6") Dim TreeNode8 As System.Windows.Forms.TreeNode = New System.Windows.Forms.TreeNode("Node7") Me.mainMenu1 = New System.Windows.Forms.MainMenu Me.TreeView1 = New System.Windows.Forms.TreeView Me.SuspendLayout() ' 'TreeView1 ' Me.TreeView1.CheckBoxes = True Me.TreeView1.Location = New System.Drawing.Point(37, 41) Me.TreeView1.Name = "TreeView1" TreeNode2.Text = "Node2" TreeNode3.Text = "Node3" TreeNode4.Text = "Node4" TreeNode1.Nodes.AddRange(New System.Windows.Forms.TreeNode() {TreeNode2, TreeNode3, TreeNode4}) TreeNode1.Text = "Node0" TreeNode6.Text = "Node5" TreeNode7.Text = "Node6" TreeNode8.Text = "Node7" TreeNode5.Nodes.AddRange(New System.Windows.Forms.TreeNode() {TreeNode6, TreeNode7, TreeNode8}) TreeNode5.Text = "Node1" Me.TreeView1.Nodes.AddRange(New System.Windows.Forms.TreeNode() {TreeNode1, TreeNode5}) Me.TreeView1.Size = New System.Drawing.Size(171, 179) Me.TreeView1.TabIndex = 0 ' 'frmTree ' Me.AutoScaleDimensions = New System.Drawing.SizeF(96.0!, 96.0!) 
Me.AutoScaleMode = System.Windows.Forms.AutoScaleMode.Dpi Me.AutoScroll = True Me.ClientSize = New System.Drawing.Size(240, 268) Me.Controls.Add(Me.TreeView1) Me.Menu = Me.mainMenu1 Me.Name = "frmTree" Me.Text = "frmTree" Me.ResumeLayout(False) End Sub #End Region End Class


  • Adding Parsekit To An Xcode Project

    - by Garry
    I am trying to add the ParseKit framework to my OS X Xcode project. I've never added a third-party framework before and I can't get it to work right. I dragged the included Xcode project into my 'Groups & Files' pane and chose to add it to my project. I then dragged ParseKit.framework underneath the Link Binary With Libraries heading. Then I double-clicked my target app and added ParseKit as a Direct Dependency. I also added libicucore.dylib as a Linked Library (as it says to do this on their site). Finally, in the Build settings tab of my target info I set the Header Search Paths to /Users/path/to/include/directory and the Other Linker Flags to -ObjC -all_load. Running this as a debug build works fine with no errors. However, when I build my app for release and then try to run the resulting executable, the app fails to load with the following error message: MyApp cannot be opened because of a problem. Check with the developer to make sure myApp works with this version of Mac OS X. Here is the dump from the crash reporter: Process: MyApp [11658] Path: /Users/Garry/Programming/Xcode/Mac/MyApp/build/Release/MyApp.app/Contents/MacOS/MyApp Identifier: com.yourcompany.MyApp Version: ??? (???) Code Type: X86-64 (Native) Parent Process: launchd [135] Date/Time: 2010-05-24 17:08:08.475 +0100 OS Version: Mac OS X 10.6.3 (10D573) Report Version: 6 Interval Since Last Report: 133300 sec Crashes Since Last Report: 3 Per-App Crashes Since Last Report: 3 Anonymous UUID: DF0265E4-B5A0-45E1-8B71-D52A27CFDDCA Exception Type: EXC_BREAKPOINT (SIGTRAP) Exception Codes: 0x0000000000000002, 0x0000000000000000 Crashed Thread: 0 Dyld Error Message: Library not loaded: @executable_path/../Frameworks/ParseKit.framework/Versions/A/ParseKit Referenced from: /Users/Garry/Programming/Xcode/Mac/MyApp/build/Release/MyApp.app/Contents/MacOS/MyApp Reason: image not found Model: MacBookPro5,5, BootROM MBP55.00AC.B03, 2 processors, Intel Core 2 Duo, 2.53 GHz, 4 GB, SMC 1.47f2 Graphics: NVIDIA GeForce 9400M, NVIDIA GeForce 9400M, PCI, 256 MB Memory Module: global_name AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8D), Broadcom BCM43xx 1.0 (5.10.91.27) Bluetooth: Version 2.3.1f4, 2 service, 2 devices, 1 incoming serial ports Network Service: AirPort, AirPort, en1 Network Service: Ethernet Adaptor (en6), Ethernet, en6 Serial ATA Device: Hitachi HTS545025B9SA02, 232.89 GB Serial ATA Device: HL-DT-ST DVDRW GS23N USB Device: Built-in iSight, 0x05ac (Apple Inc.), 0x8507, 0x24400000 USB Device: Internal Memory Card Reader, 0x05ac (Apple Inc.), 0x8403, 0x26500000 USB Device: IR Receiver, 0x05ac (Apple Inc.), 0x8242, 0x04500000 USB Device: Apple Internal Keyboard / Trackpad, 0x05ac (Apple Inc.), 0x0237, 0x04600000 USB Device: BRCM2046 Hub, 0x0a5c (Broadcom Corp.), 0x4500, 0x06100000 USB Device: Bluetooth USB Host Controller, 0x05ac (Apple Inc.), 0x8213, 0x06110000 After building the app, in addition to the executable file, Xcode also creates a file called MyApp.app.dSYM. Any idea what that is? I am developing with Xcode 3.2.2 on an Intel MBP running 10.6.3. Many thanks for any help offered.
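    The dyld message in the dump pinpoints the failure: the release binary expects ParseKit.framework inside the app bundle at @executable_path/../Frameworks, but nothing in the steps above copies it there. In Xcode this is a Copy Files build phase on the target with its destination set to "Frameworks"; done by hand, the step amounts to (a sketch, paths assumed):

        # Embed the framework in the bundle where dyld will look for it:
        mkdir -p MyApp.app/Contents/Frameworks
        cp -R ParseKit.framework MyApp.app/Contents/Frameworks/

    Why the Debug build ran is hard to say from here - possibly the framework was simply reachable from the build products directory in that configuration. As for MyApp.app.dSYM: that is the debug-symbol bundle Xcode generates for symbolicating crash logs; it stays with the developer and is not part of the shipped app.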


  • Which mobile operating system should I code for?

    - by samgoody
    It seems as though mobile computing has fully arrived. I would like to rewrite two of our programs for mobile devices, but am a bit lost as to which platform to target. Complicating this decision: I would need to learn the relevant languages and IDEs - my coding to date has been almost all web-based (PHP, JS, ActionScript, etc., some ASPX). Most users seem to be religious about their mobile choice, so conversations leave me more confused than enlightened. I do not yet own a smartphone - I will have to buy one once I know which platform to aim for. Both of my programs are aimed at business users (one is only useful for C.P.A.s). I am a single developer and cannot develop for more than one platform at a time, so getting it right is important. Based on what I've found on the web, I would have expected RIM to be a shoo-in, and the general order to be as follows:
    RIM BlackBerry - More of them than any other brand. Despite the naysayers, they've had double the sales (or perhaps 5x the sales) of any other smartphone, and they have continued to grow. And they have the business users.
    Android - According to Schmidt, they have outsold everyone else except RIM (though I can't find where I read that now), and they are just getting started. According to comScore, they are already at 8% of the market and expected to hit Schmidt's claims within six months.
    Nokia - The largest worldwide. If they would just decide between Maemo and Symbian, I would be far less confused.
    iPhone - Much more competition from other apps, fewer sales to be had, and an overlord that can delay or cancel my app at any time. Is Cocoa hard to learn?
    Windows Mobile - Word is that version 7 will not be backwards compatible, and the platform is losing market share.
    Palm webOS - Perhaps this should go first, as it is the only one that offers tools to make my life easy as a web application developer, and there is no competition in its marketplace. But there are not very many users either.
    However, a search on Stack Overflow shows a hugely disproportionate number of iPhone questions versus BlackBerry. Likewise, there are clearly more apps on iPhone, so it must be getting developer love. What is the one platform I should develop for? Please back up your answer with the logic.

