Search Results

Search found 9889 results on 396 pages for 'pointer speed'.

Page 181/396

  • Effects of changing a node in a binary tree

    - by eSKay
    Suppose I want to change the orange node in the following tree. So, the only other change I'll need to make is in the left pointer of the green node. The blue node will remain the same. Am I wrong somewhere? Because according to this article (that explains zippers), even the blue node needs to be changed. Similarly, in this picture (recolored) from the same article, why do we change the orange nodes at all (when we change x)?
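
    A minimal sketch, in Python, of the path-copying behaviour the zipper article assumes (the tree shape, colours, and helper below are illustrative guesses at the picture, not the article's code): every node on the path from the root down to the changed node gets a fresh copy, while subtrees off the path are shared.

        class Node:
            def __init__(self, value, left=None, right=None):
                self.value = value
                self.left = left
                self.right = right

        def set_value(node, path, new_value):
            """Return a new tree in which the node reached by `path` (a string of
            'L'/'R' steps from the root) carries `new_value`.

            Each node along the path is copied, because each copy must point at
            the fresh child below it; everything off the path is shared."""
            if not path:
                return Node(new_value, node.left, node.right)
            if path[0] == 'L':
                return Node(node.value, set_value(node.left, path[1:], new_value), node.right)
            return Node(node.value, node.left, set_value(node.right, path[1:], new_value))

        # Hypothetical shape: blue is the root, green its left child, orange green's left child.
        root = Node('blue', Node('green', Node('orange'), Node('x')), Node('y'))
        new_root = set_value(root, 'LL', 'new orange')
        assert new_root is not root              # the blue root was copied as well
        assert new_root.right is root.right      # the untouched right subtree is shared

    Under a mutable tree the reasoning in the question holds, but under an immutable (persistent) model like the one the article describes, the green node's parent and every further ancestor up to the root must also be replaced, which is why the whole path gets recoloured.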

    Read the article

  • UITextView insert text in the textview text

    - by John Smith
    I want to occasionally insert text into the UITextView text object. For example, if the user presses the "New Paragraph" button I would like to insert a double newline instead of just the standard single newline. How can I go about this? Do I have to read the string from the UITextView, mutate it, and write it back? Then how would I know where the pointer (the insertion point) was? Thanks

    Read the article

  • Use native HBitmap in C# while preserving alpha channel/transparency. Please check this code, it works on my computer...

    - by David
    Let's say I get an HBITMAP object/handle from a native Windows function. I can convert it to a managed bitmap using Bitmap.FromHbitmap(nativeHBitmap), but if the native image has transparency information (alpha channel), it is lost by this conversion. There are a few questions on Stack Overflow regarding this issue. Using information from the first answer of this question (How to draw ARGB bitmap using GDI+?), I wrote a piece of code that I've tried and it works. It basically gets the native HBitmap width, height and the pointer to the location of the pixel data using GetObject and the BITMAP structure, and then calls the managed Bitmap constructor:

        Bitmap managedBitmap = new Bitmap(bitmapStruct.bmWidth, bitmapStruct.bmHeight,
            bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits);

    As I understand it (please correct me if I'm wrong), this does not copy the actual pixel data from the native HBitmap to the managed bitmap; it simply points the managed bitmap to the pixel data of the native HBitmap. And I don't draw the bitmap here on another Graphics (DC) or on another bitmap, to avoid unnecessary memory copying, especially for large bitmaps. I can simply assign this bitmap to a PictureBox control or the Form BackgroundImage property. And it works: the bitmap is displayed correctly, with transparency. When I no longer use the bitmap, I make sure the BackgroundImage property is no longer pointing to the bitmap, and I dispose of both the managed bitmap and the native HBitmap.

    The question: Can you tell me whether this reasoning and code seem correct? I hope I will not get unexpected behaviors or errors, and I hope I'm freeing all the memory and objects correctly.

        private void Example()
        {
            IntPtr nativeHBitmap = IntPtr.Zero;

            /* Get the native HBitmap object from a Windows function here */

            // Create the BITMAP structure and get info from our nativeHBitmap
            NativeMethods.BITMAP bitmapStruct = new NativeMethods.BITMAP();
            NativeMethods.GetObjectBitmap(nativeHBitmap, Marshal.SizeOf(bitmapStruct), ref bitmapStruct);

            // Create the managed bitmap using the pointer to the pixel data of the native HBitmap
            Bitmap managedBitmap = new Bitmap(
                bitmapStruct.bmWidth, bitmapStruct.bmHeight,
                bitmapStruct.bmWidth * 4, PixelFormat.Format32bppArgb, bitmapStruct.bmBits);

            // Show the bitmap
            this.BackgroundImage = managedBitmap;

            /* Run the program, use the image */
            MessageBox.Show("running...");

            // When the image is no longer needed, dispose both the managed Bitmap object and the native HBitmap
            this.BackgroundImage = null;
            managedBitmap.Dispose();
            NativeMethods.DeleteObject(nativeHBitmap);
        }

        internal static class NativeMethods
        {
            [StructLayout(LayoutKind.Sequential)]
            public struct BITMAP
            {
                public int bmType;
                public int bmWidth;
                public int bmHeight;
                public int bmWidthBytes;
                public ushort bmPlanes;
                public ushort bmBitsPixel;
                public IntPtr bmBits;
            }

            [DllImport("gdi32", CharSet = CharSet.Auto, EntryPoint = "GetObject")]
            public static extern int GetObjectBitmap(IntPtr hObject, int nCount, ref BITMAP lpObject);

            [DllImport("gdi32.dll")]
            internal static extern bool DeleteObject(IntPtr hObject);
        }

    Read the article

  • agility of asp.net MVC

    - by Hellnar
    Hello, I am wondering how agile (fast to develop with, yet stable) ASP.NET MVC is compared to frameworks built on dynamic languages, such as Django or Ruby on Rails. I will be happy if you share your experience regarding development speed (assuming each language/framework is known at a similar level). Things I love about Django:

    - Fast model design thanks to the ORM
    - Good template system
    - Not too hard to deploy
    - Easy to extend
    - Lots of free apps to plug in, and great documentation

    Thanks

    Read the article

  • Leverage browser caching

    - by Daniel
    I have a website, and when I check page speed with the Google plug-in, I receive:

        Leverage browser caching
        The following resources are missing a cache expiration

    Where can I change the settings for this?

    Read the article

  • Python networkx DFS or BFS missing?

    - by sadawd
    Dear everyone, I am interested in finding a path (not necessarily the shortest) in a short amount of time. Dijkstra and A* in networkx are taking too long. Why is there no DFS or BFS in networkx? I plan to write my own DFS and BFS search (I am leaning more towards BFS because my graph is pretty deep). Is there anything in networkx's library that I can use to speed this up? Thanks
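
    A minimal BFS path-finder sketch over a networkx graph (the graph construction and node names below are hypothetical, not from the question); it reuses networkx only for adjacency via G.neighbors and returns a path with the fewest edges:

        from collections import deque
        import networkx as nx

        def bfs_path(G, source, target):
            """Return a fewest-edge path from source to target, or None if unreachable."""
            parents = {source: None}
            queue = deque([source])
            while queue:
                node = queue.popleft()
                if node == target:
                    # Walk the parent links back to the source to rebuild the path.
                    path = []
                    while node is not None:
                        path.append(node)
                        node = parents[node]
                    return path[::-1]
                for neighbor in G.neighbors(node):
                    if neighbor not in parents:
                        parents[neighbor] = node
                        queue.append(neighbor)
            return None

        # Usage with a hypothetical random graph:
        G = nx.gnm_random_graph(1000, 5000)
        print(bfs_path(G, 0, 999))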

    Read the article

  • How can I duplicate the UI functionality of Google Fastflip?

    - by marcamillion
    Trying to get the same functionality as Google FastFlip. Not just the thumbnails, but the larger images too. Notice how when you click on a story, you see the main story front and center, and smaller half-screens of the previous and next stories. How can I get that functionality? Is there a plugin for one of the JS frameworks that does this already? I also love the speed and feel of flipping through the images.

    Read the article

  • Recommendation for screen video capture for demos

    - by Simon
    I am just in the process of making some little demo tutorial videos for my app to help users get up to speed. I have used Camtasia in the past but don't really like it. Anyone have any recommendations? Good is more important than free, but free certainly helps. I can use Windows or Mac for this job.

    Read the article

  • Why is numpy's einsum faster than numpy's built in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc, without MKL, was also used to verify the timings. Please note that the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; such differences would show up in microseconds, not milliseconds:

        arr_1D=np.arange(500,dtype=np.double)
        large_arr_1D=np.arange(100000,dtype=np.double)
        arr_2D=np.arange(500**2,dtype=np.double).reshape(500,500)
        arr_3D=np.arange(500**3,dtype=np.double).reshape(500,500,500)

    First, let's look at the np.sum function:

        np.all(np.sum(arr_3D)==np.einsum('ijk->',arr_3D))
        True

        %timeit np.sum(arr_3D)
        10 loops, best of 3: 142 ms per loop

        %timeit np.einsum('ijk->', arr_3D)
        10 loops, best of 3: 70.2 ms per loop

    Powers:

        np.allclose(arr_3D*arr_3D*arr_3D,np.einsum('ijk,ijk,ijk->ijk',arr_3D,arr_3D,arr_3D))
        True

        %timeit arr_3D*arr_3D*arr_3D
        1 loops, best of 3: 1.32 s per loop

        %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
        1 loops, best of 3: 694 ms per loop

    Outer product:

        np.all(np.outer(arr_1D,arr_1D)==np.einsum('i,k->ik',arr_1D,arr_1D))
        True

        %timeit np.outer(arr_1D, arr_1D)
        1000 loops, best of 3: 411 us per loop

        %timeit np.einsum('i,k->ik', arr_1D, arr_1D)
        1000 loops, best of 3: 245 us per loop

    All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect the speedup in an operation like this:

        np.allclose(np.sum(arr_2D*arr_3D),np.einsum('ij,oij->',arr_2D,arr_3D))
        True

        %timeit np.sum(arr_2D*arr_3D)
        1 loops, best of 3: 813 ms per loop

        %timeit np.einsum('ij,oij->', arr_2D, arr_3D)
        10 loops, best of 3: 85.1 ms per loop

    einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection, the primary exception being np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent? The DGEMM case, for completeness:

        np.allclose(np.dot(arr_2D,arr_2D),np.einsum('ij,jk',arr_2D,arr_2D))
        True

        %timeit np.einsum('ij,jk',arr_2D,arr_2D)
        10 loops, best of 3: 56.1 ms per loop

        %timeit np.dot(arr_2D,arr_2D)
        100 loops, best of 3: 5.17 ms per loop

    The leading theory is from @seberg's comment that np.einsum can make use of SSE2, while numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited evidence can be found by changing the dtype of the input array and observing the speed difference, and from the fact that not everyone observes the same trends in the timings.

    Read the article

  • Advice on logic circuits and serial communications

    - by Spencer Ruport
    As far as I understand the serial port so far, transferring data is done over pin 3. As shown here:

    There are two things that make me uncomfortable about this. The first is that it seems to imply that the two connected devices agree on a signal speed, and the second is that even if they are configured to run at the same speed, you run into possible synchronization issues... right? Such things can be handled, I suppose, but it seems like there must be a simpler method. What seems like a better approach to me would be to have one of the serial port pins send a pulse that indicates that the next bit is ready to be stored. So if we're hooking these pins up to a shift register, we basically have:

        (some pulse pin)-clk, tx-d

    Is this a common practice? Is there some reason not to do this?

    EDIT: Mike shouldn't have deleted his answer. This I2C (2-pin serial) approach seems fairly close to what I did. The serial port doesn't have a clock, you're right nobugz, but that's basically what I've done. See here:

        private void SendBytes(byte[] data)
        {
            int baudRate = 0;
            int byteToSend = 0;
            int bitToSend = 0;
            byte bitmask = 0;
            byte[] trigger = new byte[1];
            trigger[0] = 0;

            SerialPort p;
            try
            {
                p = new SerialPort(cmbPorts.Text);
            }
            catch
            {
                return;
            }

            if (!int.TryParse(txtBaudRate.Text, out baudRate)) return;
            if (baudRate < 100) return;
            p.BaudRate = baudRate;

            for (int index = 0; index < data.Length * 8; index++)
            {
                byteToSend = (int)(index / 8);
                bitToSend = index - (byteToSend * 8);
                bitmask = (byte)System.Math.Pow(2, bitToSend);

                p.Open();
                p.Parity = Parity.Space;
                p.RtsEnable = (byte)(data[byteToSend] & bitmask) > 0;
                Stream s = p.BaseStream;
                s.WriteByte(trigger[0]);
                p.Close();
            }
        }

    Before anyone tells me how ugly this is or how I'm destroying my transfer speeds, my quick answer is that I don't care about that. My point is that this seems much, much simpler than the method you described in your answer, nobugz. And it wouldn't be as ugly if the .NET SerialPort class gave me more control over the pin signals. Are there other serial port APIs that do?

    Read the article

  • What is a columnar database?

    - by Raj More
    I have been working with warehousing for a while now. I am intrigued by columnar databases and the speed they offer for data retrieval. I have a multi-part question:

    - How do columnar databases work?
    - How do they differ from relational databases?
    - Is there a trial version of a columnar database I can install to play around with? (I am on Windows 7)
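
    A minimal illustration of the storage difference, sketched in Python with made-up data: a row store keeps each record's fields together, while a column store keeps each column's values together, so scanning or aggregating a single column touches far less data.

        # Row-oriented layout: one tuple per record.
        rows = [
            (1, "2010-05-01", 19.99),
            (2, "2010-05-01", 5.00),
            (3, "2010-05-02", 12.50),
        ]

        # Column-oriented layout: one contiguous list per column.
        columns = {
            "order_id":   [1, 2, 3],
            "order_date": ["2010-05-01", "2010-05-01", "2010-05-02"],
            "amount":     [19.99, 5.00, 12.50],
        }

        # Summing one column: the row store must walk every record,
        # while the column store reads only the 'amount' values.
        total_from_rows = sum(r[2] for r in rows)
        total_from_columns = sum(columns["amount"])
        assert total_from_rows == total_from_columns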

    Read the article

  • Any other kinds of "Task Queue" APIs ?

    - by sork
    I'm curious whether it's common practice outside of the GAE platform to defer tasks to background workers via webhooks. I find it particularly useful for speeding up the front-end of webapps, by delegating any long process to background tasks. I'd like to hear about open source software that allows implementing a TaskQueue-like API, preferably with webhooks, if anyone has some experience in this area. Thanks!
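
    A loose sketch of the pattern, using only Python's standard library (the endpoint, port, and payload below are hypothetical, not tied to any particular product): the front-end hands work off with a single HTTP POST to a worker URL and returns immediately, while a background thread drains the queue.

        import json
        import queue
        import threading
        from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

        tasks = queue.Queue()

        def worker():
            # Background thread: pull tasks off the queue and do the slow work here.
            while True:
                task = tasks.get()
                print("processing", task)
                tasks.task_done()

        class TaskHandler(BaseHTTPRequestHandler):
            def do_POST(self):
                # The "webhook": accept a task over HTTP, enqueue it, answer 202 at once.
                length = int(self.headers.get("Content-Length", 0))
                payload = json.loads(self.rfile.read(length) or b"{}")
                tasks.put(payload)
                self.send_response(202)
                self.end_headers()

        if __name__ == "__main__":
            threading.Thread(target=worker, daemon=True).start()
            ThreadingHTTPServer(("localhost", 8080), TaskHandler).serve_forever()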

    Read the article

  • Implement custom JTA XAResource for using with hibernate

    - by jstingo
    I have two levels of access to the database: the first with Hibernate, the second with JDBC. The JDBC level works with non-transactional tables (I use MyISAM for speed). I want to make both levels work within a transaction. I read about JTA, which can manage distributed transactions, but there is little information on the internet about how to implement and use a custom resource. Does anyone have experience with using custom XAResources?

    Read the article

  • What's the best way to store a MySQL database in source control?

    - by Marplesoft
    I am working on an application with a few other people and we'd like to store our MySQL database in source control. My thought is to have two files: one would be the create script for the tables, etc., and the other would be the inserts for our sample data. Is this a good approach? Also, what's the best way to export this information? Also, any suggestions for workflow in terms of ways to speed up the process of making changes, exporting, updating, etc.?
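
    A minimal sketch of one way to produce the two files described, by shelling out to mysqldump from Python (the database name, user, and file names are made up): --no-data dumps only the schema (CREATE statements), and --no-create-info dumps only the row data (INSERT statements).

        import subprocess

        DB = "myapp_dev"   # hypothetical database name
        USER = "dev"       # hypothetical user

        def dump(extra_args, out_file):
            # Write mysqldump's output to a file that can be committed to source control.
            with open(out_file, "w") as f:
                subprocess.run(["mysqldump", "-u", USER, "-p"] + extra_args + [DB],
                               stdout=f, check=True)

        dump(["--no-data"], "schema.sql")               # table definitions only
        dump(["--no-create-info"], "sample_data.sql")   # sample data inserts only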

    Read the article

  • C# Hide Resize Cursor

    - by Ozzy
    In my program, I'm using a WndProc override to stop my form from being resized. The thing is, the resize cursor still appears when you move the pointer to the edge of the form. Is there any way to hide this cursor?

    Read the article

  • Flex barChart and XML Data

    - by theband
        <Projectlist>
          <Project>
            <ProjectName>Alcoswitch - ToggleSwitches</ProjectName>
            <ProjectStatusname>Planning</ProjectStatusname>
          </Project>
          <Project>
            <ProjectName>Transverse Wedge</ProjectName>
            <ProjectStatusname>Canceled</ProjectStatusname>
          </Project>
          <Project>
            <ProjectName>High Speed Pluggable I/O</ProjectName>
            <ProjectStatusname>In-Progress</ProjectStatusname>
          </Project>
          <Project>
            <ProjectName>"High Speed Pluggable I/O - Product Breakouts:</ProjectName>
            <ProjectStatusname>In-Progress</ProjectStatusname>
          </Project>
          <Project>
            <ProjectName>Circular Plastic Connector (CPC)</ProjectName>
            <ProjectStatusname>In-Progress</ProjectStatusname>
          </Project>
        </Projectlist>

    This is the XML data I am receiving; how can I show this in a bar chart?

        <mx:BarChart id="barChart" showDataTips="true" dataProvider="{ProjectStateInfo}" width="100%" height="100%">
          <mx:horizontalAxis>
            <mx:CategoryAxis categoryField="ProjectStatusname"/>
          </mx:horizontalAxis>
          <mx:verticalAxis>
            <mx:CategoryAxis categoryField="ProjectName"/>
          </mx:verticalAxis>
          <mx:series>
            <mx:BarSeries id="barSeries" visible="true" yField="ProjectName" xField="ProjectStatusname" displayName="ProjectStatusname" />
          </mx:series>
        </mx:BarChart>

    My X-axis shows multiple values of In-Progress, but I just need one. Is it possible to represent such a relationship using a BarChart? Any other Flex chart is also acceptable.

    Read the article

  • iPhone Frameworks

    - by Kevin
    What is a strong iPhone framework to start out developing with, besides the SDK from Apple? Are there any that exist to speed up development time?

    Read the article

  • How to configure a Firebird Database to run in memory

    - by Robert
    I'm running software called Fishbowl Inventory, and it runs on a Firebird database (Windows Server 2003). At the moment the Fishbowl software runs extremely slowly when more than one user accesses it. I'm thinking I may be able to speed up the application by forcing the database to run "in memory". However, I cannot find documentation on how to do this. Any help would be greatly appreciated. Thank you in advance. Robert

    Read the article

  • Floating point vs integer calculations on modern hardware

    - by maxpenguin
    I am doing some performance critical work in C++, and we are currently using integer calculations for problems that are inherently floating point because "it's faster". This causes a whole lot of annoying problems and adds a lot of annoying code. Now, I remember reading about how floating point calculations were so slow circa the 386 days, when I believe (IIRC) there was an optional co-processor. But surely nowadays, with exponentially more complex and powerful CPUs, it makes no difference in "speed" whether you do floating point or integer calculations? Especially since the actual calculation time is tiny compared to something like causing a pipeline stall or fetching something from main memory? I know the correct answer is to benchmark on the target hardware; what would be a good way to test this? I wrote two tiny C++ programs and compared their run time with "time" on Linux, but the actual run time is too variable (it doesn't help that I am running on a virtual server). Short of spending my entire day running hundreds of benchmarks, making graphs, etc., is there something I can do to get a reasonable test of the relative speed? Any ideas or thoughts? Am I completely wrong? The programs I used are as follows; they are not identical by any means:

        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <time.h>

        int main( int argc, char** argv )
        {
            int accum = 0;
            srand( time( NULL ) );
            for( unsigned int i = 0; i < 100000000; ++i )
            {
                accum += rand( ) % 365;
            }
            std::cout << accum << std::endl;
            return 0;
        }

    Program 2:

        #include <iostream>
        #include <cmath>
        #include <cstdlib>
        #include <time.h>

        int main( int argc, char** argv )
        {
            float accum = 0;
            srand( time( NULL ) );
            for( unsigned int i = 0; i < 100000000; ++i )
            {
                accum += (float)( rand( ) % 365 );
            }
            std::cout << accum << std::endl;
            return 0;
        }

    Thanks in advance!

    Read the article
