Search Results

Search found 15637 results on 626 pages for 'memory efficient'.

Page 557 of 626

  • OpenGL equivalent of GDI's HatchBrush or PatternBrush?

    - by Ptah- Opener of the Mouth
    I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL. I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush. More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on. Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"? I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern, based upon the vertices of the polygon? Whereas I want the pattern tiled. Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn? Thanks in advance for any help.
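
    For reference, legacy (compatibility-profile) OpenGL has a close analogue of a monochrome pattern brush in glPolygonStipple: it takes a 32x32 bit mask that is tiled in window coordinates (not transformed with the polygon), draws the "on" bits in the current color, and leaves the "off" bits untouched. A minimal sketch, with the mask values being purely illustrative:

        /* Fill a quad with a window-aligned 32x32 stipple pattern. */
        #include <GL/gl.h>

        static const GLubyte hatch[128] = {   /* 32x32 bits = 128 bytes; the rest default to 0 */
            0x11, 0x11, 0x11, 0x11,  0x22, 0x22, 0x22, 0x22,
            0x44, 0x44, 0x44, 0x44,  0x88, 0x88, 0x88, 0x88
        };

        void draw_hatched_rect(float x0, float y0, float x1, float y1)
        {
            glEnable(GL_POLYGON_STIPPLE);
            glPolygonStipple(hatch);        /* pattern is screen-aligned, not stretched or skewed */
            glColor3f(1.0f, 0.0f, 0.0f);    /* the "on" bits appear in this color */
            glBegin(GL_QUADS);
                glVertex2f(x0, y0); glVertex2f(x1, y0);
                glVertex2f(x1, y1); glVertex2f(x0, y1);
            glEnd();
            glDisable(GL_POLYGON_STIPPLE);
        }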

    Read the article

  • How do I store a string longer than 4000 characters in an Oracle Database using Java/JDBC?

    - by Ventrue
    I’m not sure how to use Java/JDBC to insert a very long string into an Oracle database. I have a String which is greater than 4000 characters, let’s say it’s 6000. I want to take this string and store it in an Oracle database. The way to do this seems to be with the CLOB datatype. Okay, so I declared the column as description CLOB. Now, when it comes time to actually insert the data, I have a prepared statement pstmt. It looks like pstmt = conn.prepareStatement(“INSERT INTO Table VALUES(?)”). So I want to use the method pstmt.setClob(). However, I don’t know how to create a Clob object with my String in it; there’s no constructor (presumably because it can be potentially much larger than available memory). How do I put my String into a Clob? Keep in mind I’m not a very experienced programmer; please try to keep the explanations as simple as possible. Efficiency, good practices, etc. are not a concern here, I just want the absolute easiest solution. I’d like to avoid downloading other packages if at all possible; right now I’m just using JDK 1.4 and what is labelled “ojdbc14.jar”. I've looked around a bit but I haven't been able to follow any of the explanations I've found. If you have a solution that doesn’t use Clobs, I’d be open to that as well, but it has to be one column.
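
    One route that is commonly suggested for exactly this situation, and that avoids constructing a Clob at all, is PreparedStatement.setCharacterStream, which streams the characters into the CLOB column. A minimal sketch; the table and column names are illustrative:

        import java.io.StringReader;
        import java.sql.Connection;
        import java.sql.PreparedStatement;

        // Write a long String into a CLOB column without creating a Clob object.
        void insertLongString(Connection conn, String longText) throws Exception {
            PreparedStatement pstmt =
                conn.prepareStatement("INSERT INTO MY_TABLE (DESCRIPTION) VALUES (?)");
            pstmt.setCharacterStream(1, new StringReader(longText), longText.length());
            pstmt.executeUpdate();
            pstmt.close();
        }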

    Read the article

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following: 2010-05-12 12:23:45 Some sort of log entry 2010-05-12 01:45:12 Request XML: <RootTag> <Element>Value</Element> <Element>Another Value</Element> </RootTag> 2010-05-12 01:45:32 Response XML: <ResponseRoot> <Element>Value</Element> </ResponseRoot> 2010-05-12 01:45:49 Another log entry What I want to do is extract the Request and Response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep but the XML was all on one line, not multiple ones like above. The log files are also somewhat large, hitting 500-600 megs a log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way using the built-in tools on a Linux box (CentOS in this case) to extract multiple lines or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
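
    For what it's worth, sed's range addressing handles multi-line blocks like this without loading the whole file into memory. A sketch, assuming each block starts on the "Request XML:"/"Response XML:" line and ends at the closing root tag (the tag and file names follow the example):

        sed -n '/Request XML:/,/<\/RootTag>/p'       app.log > requests.txt
        sed -n '/Response XML:/,/<\/ResponseRoot>/p' app.log > responses.txt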

    Read the article

  • How can I SETF an element in a tree by an accessor?

    - by Willi Ballenthin
    We've been using Lisp in my AI course. The assignments I've received have involved searching and generating tree-like structures. For each assignment, I've ended up writing something like: (defun initial-state () (list 0 ; score nil ; children 0 ; value 0)) ; something else and building my functions around these "states", which are really just nested lists with some loosely defined structure. To make the structure more rigid, I've tried to write accessors, such as: (defun state-score ( state ) (nth 2 state)) This works for reading the value (which should be all I need in a nicely functional world; however, as time crunches and I start to madly hack, sometimes I want a mutable structure). I don't seem to be able to SETF the returned ...thing (place? value? pointer?). I get an error with something like: (setf (state-score *state*) 10) Sometimes I seem to have a little more luck writing the accessor/mutator as a macro: (defmacro state-score ( state ) `(nth 2 ,state)) However, I don't know why this should be a macro, so I certainly shouldn't write it as a macro (except that sometimes it works; programming by coincidence is bad). What is an appropriate strategy to build up such structures? More importantly, where can I learn about what's going on here (what operations affect the memory in what way)?
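
    For reference, DEFSTRUCT is the usual replacement for hand-rolled NTH-based accessors: it defines named accessors that are SETF-able out of the box. A minimal sketch; the slot names mirror the question and are otherwise illustrative:

        ;; Defines MAKE-STATE, STATE-SCORE, STATE-CHILDREN, ... plus a SETF expander for each.
        (defstruct state
          (score 0)
          (children nil)
          (value 0)
          (other 0))

        ;; Usage:
        ;;   (defparameter *state* (make-state))
        ;;   (setf (state-score *state*) 10)   ; works, because DEFSTRUCT provides the SETF expansion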

    Read the article

  • Failure remediation strategy for File I/O

    - by Brett
    I'm doing buffered IO into a file, both read and write. I'm using fopen(), fseeko(), standard ANSI C file I/O functions. In all cases, I'm writing to a standard local file on a disk. How often do these file I/O operations fail, and what should the strategy be for failures? I'm not exactly looking for stats, but I'm looking for a general purpose statement on how far I should go to handle error conditions. For instance, I think everyone recognizes that malloc() could and probably will fail someday on some user's machine and the developer should check for a NULL being returned, but there is no great remediation strategy since it probably means the system is out of memory. At least, this seems to be the approach taken with malloc() on desktop systems, embedded systems are different. Likewise, is it worth reattempting a file I/O operation, or should I just consider a failure to be basically unrecoverable, etc. I would appreciate some code samples demonstrating proper usage, or a library guide reference that indicates how this is to be handled. Any other data is, of course, welcome.
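
    As a rough baseline, the usual level of error handling for ANSI C buffered file I/O is to check every return value (including fclose, where buffered data actually reaches the disk), report via errno, and treat a failure as fatal for that operation rather than retrying. A sketch under those assumptions:

        #include <errno.h>
        #include <stdio.h>
        #include <string.h>

        int write_block(const char *path, const void *buf, size_t len)
        {
            FILE *fp = fopen(path, "wb");
            if (fp == NULL) {
                fprintf(stderr, "fopen(%s): %s\n", path, strerror(errno));
                return -1;
            }
            if (fwrite(buf, 1, len, fp) != len) {
                fprintf(stderr, "fwrite(%s): %s\n", path, strerror(errno));
                fclose(fp);
                return -1;
            }
            if (fclose(fp) != 0) {   /* fclose can fail too: this is where buffers are flushed */
                fprintf(stderr, "fclose(%s): %s\n", path, strerror(errno));
                return -1;
            }
            return 0;
        }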

    Read the article

  • Accessing Members of Containing Objects from Contained Objects.

    - by Bunkai.Satori
    If I have several levels of object containment (one object defines and instantiates another object, which defines and instantiates another object...), is it possible to get access to the variables and functions of the upper, containing objects, please? Example: class CObjectOne { public: CObjectOne::CObjectOne() { Create(); }; void Create(); std::vector<CObjectTwo> vObjectsTwo; int nVariableOne; } bool CObjectOne::Create() { CObjectTwo ObjectTwo(this); vObjectsTwo.push_back(ObjectTwo); } class CObjectTwo { public: CObjectTwo::CObjectTwo(CObjectOne* pObject) { pObjectOne = pObject; Create(); }; void Create(); CObjectOne* GetObjectOne(){return pObjectOne;}; std::vector<CObjectThree> vObjectsThree; CObjectOne* pObjectOne; int nVariableTwo; } bool CObjectTwo::Create() { CObjectThree ObjectThree(this); vObjectsThree.push_back(ObjectThree); } class CObjectThree { public: CObjectThree::CObjectThree(CObjectTwo* pObject) { pObjectTwo = pObject; Create(); }; void Create(); CObjectTwo* GetObjectTwo(){return pObjectTwo;}; std::vector<CObjectFour> vObjectsFour; CObjectTwo* pObjectTwo; int nVariableThree; } bool CObjectThree::Create() { CObjectFour ObjectFour(this); vObjectsFour.push_back(ObjectFour); } main() { CObjectOne myObject1; } Say that, from within CObjectThree, I need to access nVariableOne in CObjectOne. I would like to do it as follows: int nValue = vObjectsThree[index].GetObjectTwo()->GetObjectOne()->nVariableOne; However, after compiling and running my application, I get a memory access violation error. What is wrong with the code above (it is an example and might contain spelling mistakes)? Do I have to create the objects dynamically instead of statically? Is there any other way to access variables stored in containing objects from within contained objects?
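
    One safe pattern, sketched below: give each child a back-pointer to its container, and make sure the pointer is set on the element that actually lives in the vector rather than on a stack temporary (pointers into destroyed stack temporaries are a common cause of this kind of access violation). The names here are illustrative and do not follow the question exactly:

        #include <vector>

        struct Parent;                         // forward declaration for the back-pointer

        struct Child {
            Parent* parent = nullptr;          // points at the containing object
            int valueFromParent() const;       // defined once Parent is complete
        };

        struct Parent {
            int value = 42;
            std::vector<Child> children;
            void create() {
                children.push_back(Child{});   // the element now lives inside the vector...
                children.back().parent = this; // ...so this back-pointer does not dangle
            }
        };

        int Child::valueFromParent() const { return parent->value; }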

    Read the article

  • Interface for reading variable length files with header and footer.

    - by John S
    I could use some hints or tips for a decent interface for reading files with special characteristics. The files in question have a header (~120 bytes), a body (1 byte - 3gb) and a footer (4 bytes). The header contains information about the body and the footer is only a simple CRC32-value of the body. I use Java, so my idea was to extend the "InputStream" class and add a constructor such as "public MyInStream( InputStream in)" where I immediately read the header and then direct the overridden read()s at the body. Problem is, I can't give the user of the class the CRC32-value until the whole body has been read. Because the file can be 3gb large, putting it all in memory is a bad idea. Reading it all into a temporary file is going to be a performance hit if there are many small files. I don't know how large the file is because the InputStream doesn't have to be a file, it could be a socket. Looking at it again, maybe extending InputStream is a bad idea. Thank you for reading the confused thoughts of a tired programmer. :)
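
    One way to keep the stream-like interface is a FilterInputStream that consumes the fixed-size header up front and updates a CRC32 as the body flows through, so the checksum is available once the caller has read to the end of the body. A sketch with illustrative sizes and names (separating the trailing 4-byte footer from the body still requires the body length from the header):

        import java.io.FilterInputStream;
        import java.io.IOException;
        import java.io.InputStream;
        import java.util.zip.CRC32;

        public class BodyInputStream extends FilterInputStream {
            private final byte[] header = new byte[120];
            private final CRC32 crc = new CRC32();

            public BodyInputStream(InputStream in) throws IOException {
                super(in);
                int off = 0;
                while (off < header.length) {               // read the full header up front
                    int n = in.read(header, off, header.length - off);
                    if (n < 0) throw new IOException("truncated header");
                    off += n;
                }
            }

            @Override public int read() throws IOException {
                int b = super.read();
                if (b >= 0) crc.update(b);
                return b;
            }

            @Override public int read(byte[] buf, int off, int len) throws IOException {
                int n = super.read(buf, off, len);
                if (n > 0) crc.update(buf, off, n);
                return n;
            }

            public byte[] getHeader() { return header.clone(); }
            public long getBodyCrc()  { return crc.getValue(); }  // meaningful after the body is read
        }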

    Read the article

  • Null Pointer Exception in Primavera P6 8.1

    - by gwrichard
    I am getting a null pointer exception in a Primavera P6 8.1 installation. The exception only occurs in one section of the web-client: Application settings.P6 web and the P6 API are deployed on the same WebLogic (10.3.5) node running on a Windows Server 2008 R2 installation. I have done this installation using this same software stack a dozen times and don't have this issue on any of the other installs. Exact error below: Match: beginTraversal Match: digest selected JREDesc: JREDesc[version 1.6.0_20+, heap=-1--1, args=null, href=http://java.sun.com/products/autodl/j2se, sel=false, null, null], JREInfo: JREInfo for index 0: platform is: 1.7 product is: 1.7.0_17 location is: http://java.sun.com/products/autodl/j2se path is: C:\Program Files (x86)\Java\jre7\bin\javaw.exe args is: null native platform is: Windows, x86 [ x86, 32bit ] JavaFX runtime is: JavaFX 2.2.7 found at C:\Program Files (x86)\Java\jre7\ enabled is: true registered is: true system is: true Match: ignoring maxHeap: -1 Match: ignoring InitHeap: -1 Match: digesting vmargs: null Match: digested vmargs: [JVMParameters: isSecure: true, args: ] Match: JVM args after accumulation: [JVMParameters: isSecure: true, args: ] Match: digest LaunchDesc: http://localhost:7001/p6/action/jnlp/appletsjnlp.jnlp?mainClass=com.primavera.pvapplets.adminpreferences.AdminPreferencesApplet&classPath=adminpreferences.jar,prm-applets-common.jar,forms-1.0.7.jar,prm-guisupport.jar,prm-to.jar,jide.jar,tablesupport.jar,formsupport.jar,applets-bo.jar,commons-lang.jar,prm-common.jar,resource_strings.jar,prm-img.jar,commons-logging.jar&name=AdminPreferences&version=8.1.2.0.0602 Match: digest properties: [] Match: JVM args: [JVMParameters: isSecure: true, args: ] Match: endTraversal .. Match: JVM args final: Match: Running JREInfo Version match: 1.7.0.17 == 1.7.0.17 Match: Running JVM args match: have:<> satisfy want:<> Java Plug-in 10.17.2.02 Using JRE version 1.7.0_17-b02 Java HotSpot(TM) Client VM User home directory = C:\Users\gwrichard ---------------------------------------------------- c: clear console window f: finalize objects on finalization queue g: garbage collect h: display this help message l: dump classloader list m: print memory usage o: trigger logging q: hide console r: reload policy configuration s: dump system and deployment properties t: dump thread list v: dump thread stack x: clear classloader cache 0-5: set trace level to <n> ----------------------------------------------------

    Read the article

  • PHP in Wordpress Posts - Is this okay?

    - by Thomas
    I've been working with some long lists of information and I've come up with a good way to post it in various formats on my wordpress blog posts. I installed the exec-PHP plugin, which allows you to run php in posts. I then created a new table (NEWTABLE) in my wordpress database and filled that table with names, scores, and other stuff. I was then able to use some pretty simple code to display the information in a wordpress post. Below is an example, but you could really do whatever you wanted. My question is - is there a problem with doing this? with security? or memory? I could just type out all the information in each post, but this is really much nicer. Any thoughts are appreciated. <?php $theResult = mysql_query("SELECT * FROM NEWTABLE WHERE Score < 100 ORDER BY LastName"); while($row = mysql_fetch_array($theResult)) { echo $row['FirstName']; echo " " . $row['LastName']; echo " " . $row['Score']; echo "<br />"; } ?>
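
    On the security side, one commonly recommended adjustment is to go through WordPress's own database layer ($wpdb) with a prepared query instead of raw mysql_* calls, and to escape the output. A sketch; the table and column names follow the example and are otherwise illustrative:

        <?php
        global $wpdb;
        $rows = $wpdb->get_results(
            $wpdb->prepare(
                "SELECT FirstName, LastName, Score FROM NEWTABLE WHERE Score < %d ORDER BY LastName",
                100
            )
        );
        foreach ($rows as $row) {
            echo esc_html("{$row->FirstName} {$row->LastName} {$row->Score}") . "<br />";
        }
        ?>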

    Read the article

  • SIGABRT on iPhone when changing xib

    - by Boz
    I've just finished off an app for the iPhone which, until today, ran fine on the iPhone simulator and actual devices. I tried changing the xib which is loaded in the applicationDidFinishLaunching method in my application delegate class - all I did was change the string in initWithNibName. When I launch the app on the simulator, the Default.png image is shown, then the app crashes with an uncaught exception. When running on a device, the Default.png image is shown for about 10 seconds, the UI is never loaded and I get 'GDB: Program received signal: "SIGABRT".' on the Xcode status bar. Debugging shows that applicationDidFinishLaunching is never actually reached before the app crashes. Setting the starting xib back to the original solves the issue, but now I've made a change and saved it in the Interface Builder and the app shows the same issues as above - I've made no code changes at all. Is this a memory issue, or a known issue of a common mistake? NOTE: I've made no code changes whatsoever, and the only changes I've made to the xib are cosmetic, the IBOutlets are all intact.

    Read the article

  • How to accommodate for the iPhone 4 screen resolution?

    - by dontWatchMyProfile
    This is a programming question! Read on before you vote to close! According to Apple, the iPhone 4 has a new and better screen resolution: 3.5-inch (diagonal) widescreen Multi-Touch display, 960-by-640-pixel resolution at 326 ppi. This little detail affects our apps heavily. Most of the demo apps on the net have one thing in common: they position views in the belief that the screen has a fixed size of 320 x 480 pixels. So what most, if not all, developers do is design everything in such a way that a touchable area is, for example, 50 x 50 pixels big. Just enough to tap it. Things have been positioned relative to the upper left, to reach a specific position on screen - let's say the center, or somewhere at the bottom. Edit: It seems Apple has integrated a switch that allows you to tell if an app is highRes or not. Nice. When we develop high-resolution apps, they probably won't work on older devices. And if they do, they would suffer a lot from images four times the size, having to scale them down in memory.
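
    For reference, UIKit keeps layout in points rather than pixels, so a 320 x 480 layout still fills the iPhone 4 screen; the scale factor is what tells you whether Retina-resolution (@2x) artwork will be used. A small sketch using standard UIKit calls:

        CGRect bounds = [[UIScreen mainScreen] bounds];   // still 320 x 480, in points
        CGFloat scale = [[UIScreen mainScreen] scale];    // 2.0 on iPhone 4, 1.0 on older devices
        NSLog(@"bounds=%@ scale=%.1f", NSStringFromCGRect(bounds), scale);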

    Read the article

  • How to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that are different from their default constructed value. So, sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries-- though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store: size_t* indices_; // a dynamic array T* values_; // a dynamic array int size_; int capacity_; Why don't they simply use vector<pair<size_t, T>> data_; My main question is what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you, and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations, and might have better locality of reference. The other solution might search more quickly since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that vector of pairs is the better solution, yet both containers chose the other solution. Why?
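
    For comparison, a compact sketch of the vector-of-pairs layout with binary-search lookup, assuming the entries are kept sorted by index (this is the "one block, simpler bookkeeping" option discussed above):

        #include <algorithm>
        #include <cstddef>
        #include <utility>
        #include <vector>

        template <typename T>
        class sparse_vector {
            using entry = std::pair<std::size_t, T>;
            std::vector<entry> data_;   // kept sorted by index
        public:
            T get(std::size_t i) const {
                auto it = std::lower_bound(data_.begin(), data_.end(), i,
                    [](const entry& e, std::size_t idx) { return e.first < idx; });
                return (it != data_.end() && it->first == i) ? it->second : T();
            }
            void set(std::size_t i, const T& v) {
                auto it = std::lower_bound(data_.begin(), data_.end(), i,
                    [](const entry& e, std::size_t idx) { return e.first < idx; });
                if (it != data_.end() && it->first == i) it->second = v;
                else data_.insert(it, entry(i, v));
            }
        };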

    Read the article

  • Segmentation in Linux : Segmentation & Paging are redundant?

    - by claws
    Hello, I'm reading "Understanding Linux Kernel". This is the snippet that explains how Linux uses Segmentation which I didn't understand. Segmentation has been included in 80 x 86 microprocessors to encourage programmers to split their applications into logically related entities, such as subroutines or global and local data areas. However, Linux uses segmentation in a very limited way. In fact, segmentation and paging are somewhat redundant, because both can be used to separate the physical address spaces of processes: segmentation can assign a different linear address space to each process, while paging can map the same linear address space into different physical address spaces. Linux prefers paging to segmentation for the following reasons: Memory management is simpler when all processes use the same segment register values that is, when they share the same set of linear addresses. One of the design objectives of Linux is portability to a wide range of architectures; RISC architectures in particular have limited support for segmentation. All Linux processes running in User Mode use the same pair of segments to address instructions and data. These segments are called user code segment and user data segment , respectively. Similarly, all Linux processes running in Kernel Mode use the same pair of segments to address instructions and data: they are called kernel code segment and kernel data segment , respectively. Table 2-3 shows the values of the Segment Descriptor fields for these four crucial segments. I'm unable to understand 1st and last paragraph.

    Read the article

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server would then serialize the graph and send it to the client (presumably using WCF); the client then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to make modifications to the server. However, I've learned that object-graph serialisation in WCF isn't as simple as I first thought. My objects have a hierarchy and many have parametrised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialisation is that it requires use of either the XmlSerializer or DataContractSerializer, but DCS places restrictions on the design of my object-graph (immutable data seems right out, and it also requires parameterless constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serializer constructor. That is fine by me. I spoke to a friend of mine about this, and he advocates going for a Data Transport Object-only route, where I'd have to maintain a separate DataContract object-graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses Sockets. So my questions are: Has anyone faced a similar problem before and if so, are there better approaches? Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it and sends only bits that have changed (thus reducing concurrency-related risks)? If sending the object-graph down is the right way, is WCF the right tool? And if WCF is right, what's the best way to get WCF to serialise my object graph?
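
    One detail worth checking: DataContractSerializer does not actually require parameterless constructors (it deserializes straight into the marked fields without calling a constructor), and marking the contract with IsReference = true makes it preserve shared references within a graph. A sketch of what a graph node could look like under those assumptions; the type and member names are illustrative, not taken from the question:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]
        public class DnsZone
        {
            [DataMember] private string name;
            [DataMember] private List<string> records = new List<string>();

            public DnsZone(string name) { this.name = name; }   // no parameterless constructor needed

            public string Name { get { return name; } }
            public IList<string> Records { get { return records; } }
        }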

    Read the article

  • Compiling Objective-C project on Linux (Ubuntu)

    - by Alex
    How do I make an Objective-C project work on Ubuntu? My files are: Fraction.h #import <Foundation/NSObject.h> @interface Fraction: NSObject { int numerator; int denominator; } -(void) print; -(void) setNumerator: (int) n; -(void) setDenominator: (int) d; -(int) numerator; -(int) denominator; @end Fraction.m #import "Fraction.h" #import <stdio.h> @implementation Fraction -(void) print { printf( "%i/%i", numerator, denominator ); } -(void) setNumerator: (int) n { numerator = n; } -(void) setDenominator: (int) d { denominator = d; } -(int) denominator { return denominator; } -(int) numerator { return numerator; } @end main.m #import <stdio.h> #import "Fraction.h" int main( int argc, const char *argv[] ) { // create a new instance Fraction *frac = [[Fraction alloc] init]; // set the values [frac setNumerator: 1]; [frac setDenominator: 3]; // print it printf( "The fraction is: " ); [frac print]; printf( "\n" ); // free memory [frac release]; return 0; } I've tried two approaches to compile it: Pure gcc: $ sudo apt-get install gobjc gnustep gnustep-devel $ gcc `gnustep-config --objc-flags` -o main main.m -lobjc -lgnustep-base /tmp/ccIQKhfH.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction' I created a GNUmakefile: include ${GNUSTEP_MAKEFILES}/common.make TOOL_NAME = main main_OBJC_FILES = main.m include ${GNUSTEP_MAKEFILES}/tool.make ... and ran: $ source /usr/share/GNUstep/Makefiles/GNUstep.sh $ make Making all for tool main... Linking tool main ... ./obj/main.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction' So in both cases the compiler gets stuck at undefined reference to `__objc_class_name_Fraction'. Do you have any idea how to resolve this issue?
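
    A commonly cited cause of this exact undefined __objc_class_name_Fraction reference is that Fraction.m is never compiled and linked: both invocations above only mention main.m. A sketch of the adjusted commands (file names follow the question):

        # plain gcc: compile and link both source files
        gcc `gnustep-config --objc-flags` -o main main.m Fraction.m -lobjc -lgnustep-base

        # GNUmakefile: list every .m file for the tool
        include $(GNUSTEP_MAKEFILES)/common.make
        TOOL_NAME = main
        main_OBJC_FILES = main.m Fraction.m
        include $(GNUSTEP_MAKEFILES)/tool.make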

    Read the article

  • Doubts about .NET Garbage Collector

    - by Smjert
    I've read some docs about the .NET Garbage Collector but I still have some doubts (examples in C#): 1) Does GC.Collect() call a partial or a full collection? 2) Does a partial collection block the execution of the "victim" application? If yes, then I suppose this is a very "light" thing to do, since I'm running a game server that uses 2-3GB of memory and I "never" have execution stops (or I can't see them...). 3) I've read about GC roots but still can't understand how exactly they work. Suppose that this is the code (C#): MyClass1: [...] public List<MyClass2> classList = new List<MyClass2>(); [...] Main: main() { MyClass1 a = new MyClass1(); MyClass2 b = new MyClass2(); a.classList.Add(b); b = null; DoSomeLongWork(); } Will b ever be eligible to be garbage collected (before DoSomeLongWork finishes)? The reference to b that classList contains, can it be considered a root? Or is a root only the first reference to the instance? (I mean, b is the root reference because the instantiation happens there).
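
    On question 1, a small sketch of the relevant API surface: the parameterless GC.Collect() requests a collection of all generations, the GC.Collect(int) overload limits it to a generation, and GC.CollectionCount lets you observe which generations have actually run. A small illustrative sketch:

        using System;

        class GcDemo
        {
            static void Main()
            {
                int gen2Before = GC.CollectionCount(2);

                GC.Collect(0);   // generation 0 only
                GC.Collect();    // generations 0 through 2 (a full collection)

                Console.WriteLine("gen2 collections during demo: {0}",
                                  GC.CollectionCount(2) - gen2Before);
            }
        }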

    Read the article

  • Special technology needed for browser based chat?

    - by orokusaki
    On this post, I read about the usage of XMPP. Is this sort of thing necessary, and more importantly, my main question expanded: Can a chat server and client be built efficiently using only standard HTTP and browser technologies (such as PHP and JS, or RoR and JS, etc)? Or is it best to stick with older protocols like XMPP and find a way to integrate them with my application? I looked into CampFire via LiveHTTPHeaders and Firebug for about 5 minutes, and it appears to use Ajax to send a request which is never answered until another chat happens. Is this just CampFire opening a new thread on the server to listen for an update and then returning a response to the request when the thread hears an update? I noticed that they're requesting on a specific port (8043 if memory serves me) which makes me think that they're doing something more complex than just what I mentioned. Also, the URL requested started with /tcp/ which I found interesting. Note: I don't expect to ever have more than 150 users live-chatting in all the rooms combined at the same time. I understand that if I were building a hosted, pay-for chat service like CampFire with thousands of concurrent users, it would behoove me to invest time in researching special technologies vs trying to reinvent the wheel in a simple way in my app. Also, if you're going to do it with server polling, how often would you personally poll to maximize response without slamming the server?
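
    What the question describes for CampFire is essentially long polling, which needs nothing beyond standard HTTP and JavaScript: the request is held open until a new message arrives, then the client immediately re-requests. A browser-side sketch; the /poll endpoint and the showMessage helper are hypothetical:

        function poll(lastId) {
            var xhr = new XMLHttpRequest();
            xhr.open("GET", "/poll?after=" + lastId, true);
            xhr.onreadystatechange = function () {
                if (xhr.readyState !== 4) return;
                var next = lastId;
                if (xhr.status === 200) {
                    var msgs = JSON.parse(xhr.responseText);   // e.g. [{id: 7, text: "hi"}, ...]
                    for (var k = 0; k < msgs.length; k++) {
                        showMessage(msgs[k].text);             // hypothetical UI helper
                        next = msgs[k].id;
                    }
                }
                poll(next);                                    // reopen the connection right away
            };
            xhr.send();
        }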

    Read the article

  • c++ function scope

    - by Myx
    I have a main function in A.cpp which has the following relevant two lines of code: B definition(input_file); definition.Print(); In B.h I have the following relevant lines of code: class B { public: // Constructors B(void); B(const char *filename); ~B(void); // File input int ParseLSFile(const char *filename); // Debugging void Print(void); // Data int var1; double var2; vector<char* > var3; map<char*, vector<char* > > var4; } In B.cpp, I have the following function signatures (sorry for being redundant): B::B(void) : var1(-1), var2(numeric_limits<double>::infinity()) { } B::B(const char *filename) { B *def = new B(); def->ParseLSFile(filename); } B::~B(void) { // Free memory for var3 and var 4 } int B::ParseLSFile(const char *filename) { // assign var1, var2, var3, and var4 values } void B::Print(void) { // print contents of var1, var2, var3, and var4 to stdout } So when I call Print() from within B::ParseLSFile(...), then the contents of my structures print correctly to stdout. However, when I call definition.Print() from A.cpp, my structures are empty or contain garbage. Can anyone recommend the correct way to initialize/pass my structures so that I can access them outside of the scope of my function definition? Thanks.
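
    For reference, the filename constructor above parses into a separate heap-allocated B and never touches the object being constructed, which would leave definition's members unset. A sketch of the usual shape of such a constructor (member names follow the question):

        B::B(const char* filename)
            : var1(-1), var2(std::numeric_limits<double>::infinity())
        {
            ParseLSFile(filename);   // fills var1..var4 of this instance, not of a separate temporary
        }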

    Read the article

  • Loading Jscript files into Firefox extension

    - by colon3l
    Hi! Let's get straight to the problem: I'm currently writing a Firefox extension in which I would like to implement the jWebsocket API in order to build a small chat. I have my main script file, named test.js, and the jWebsocket lib in a js folder. Just so you know, this is my first Firefox extension ever. So in my XUL file I have this (the script part only, of course; the interface code is not shown): <overlay id="test-overlay" xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul"> <script type="application/x-javascript" src="chrome://test/content/test.js" /> <script type="application/x-javascript" src="chrome://test/content/js/jwebsocket.js" /> jwebsocket.js being the file I need to call according to the jWebsocket website. In my main script file test.js I start with: if (jws.browserSupportsWebSockets()) { jWebSocketClient = new jws.jWebSocketJSONClient(); } else { var lMsg = jws.MSG_WS_NOT_SUPPORTED; alert(lMsg); } jws being the namespace created in the jwebsocket.js file. Of course I've got the required stand-alone server running in the background, and working. From what I understood looking at various websites, if a JS file is loaded into the allocated JavaScript memory space (with the script tag), all namespaces/functions should be available between files. But this was mostly for HTML-oriented issues, so I'm not sure if it applies to the XUL/Firefox environment. But the script keeps failing at the first jws call. Any ideas on what's going wrong here? I've been stuck for two days now :/
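
    If the load order of the two script elements turns out to be the issue, another way to pull a library into an overlay's scope is the subscript loader, which makes the dependency explicit. A sketch; the chrome:// URL follows the question:

        var loader = Components.classes["@mozilla.org/moz/jssubscript-loader;1"]
                               .getService(Components.interfaces.mozIJSSubScriptLoader);
        loader.loadSubScript("chrome://test/content/js/jwebsocket.js", window);

        if (typeof jws !== "undefined" && jws.browserSupportsWebSockets()) {
            var jWebSocketClient = new jws.jWebSocketJSONClient();
        } else {
            alert("WebSockets not supported in this context");
        }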

    Read the article

  • How can I improve the performance of this double-for print?

    - by Florenc
    I have the following static method that prints the data imported from a 40,000-line .xls spreadsheet. Now, it takes about 27 seconds to print the data to the console and the memory consumption is huge. import org.apache.poi.hssf.usermodel.*; import org.apache.poi.ss.usermodel.*; public static void printSheetData(List<List<HSSFCell>> sheetData) { for (int i = 0; i < sheetData.size(); i++) { List<HSSFCell> list = (List<HSSFCell>) sheetData.get(i); for (int j = 0; j < list.size(); j++) { HSSFCell cell = (HSSFCell) list.get(j); System.out.print(cell.toString()); if (j < list.size() - 1) { System.out.print(", "); } } System.out.println(""); } } Disclaimer: I know, I know: large data belongs in a database, don't print output to the console, premature optimization is the root of all evil...
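
    Much of the cost here is likely the one unbuffered console write per cell. A sketch of the same loop writing each row through a StringBuilder and a BufferedWriter, which cuts the number of I/O calls dramatically without changing the output:

        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.io.OutputStreamWriter;
        import java.util.List;
        import org.apache.poi.hssf.usermodel.HSSFCell;

        public static void printSheetData(List<List<HSSFCell>> sheetData) throws IOException {
            BufferedWriter out = new BufferedWriter(new OutputStreamWriter(System.out), 1 << 16);
            StringBuilder line = new StringBuilder();
            for (List<HSSFCell> row : sheetData) {
                line.setLength(0);
                for (int j = 0; j < row.size(); j++) {
                    if (j > 0) line.append(", ");
                    line.append(row.get(j).toString());
                }
                out.write(line.toString());
                out.newLine();
            }
            out.flush();
        }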

    Read the article

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP 2 & DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call (the call locks) from completing (and I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by dev-cpp, which will also lock on the Dell machines). I have tried to debug my own apps to see where exactly the GL calls freeze, but there is some global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDBG, to go into an access violation reporting loop or the program to crash if the exception is passed along). The hook: ntdll.ZwQueryInformationProcess 7C90D7E0 B8 9A000000 MOV EAX,9A 7C90D7E5 BA 0003FE7F MOV EDX,7FFE0300 7C90D7EA FF12 CALL DWORD PTR DS:[EDX] 7C90D7EC - E9 0F28448D JMP 09D50000 7C90D7F1 9B WAIT 7C90D7F2 0000 ADD BYTE PTR DS:[EAX],AL 7C90D7F4 00BA 0003FE7F ADD BYTE PTR DS:[EDX+7FFE0300],BH 7C90D7FA FF12 CALL DWORD PTR DS:[EDX] 7C90D7FC C2 1400 RETN 14 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C The messed-up function + call: ntdll.ZwQueryInformationThread 7C90D7F0 8D9B 000000BA LEA EBX,DWORD PTR DS:[EBX+BA000000] 7C90D7F6 0003 ADD BYTE PTR DS:[EBX],AL 7C90D7F8 FE ??? ; Unknown command 7C90D7F9 7F FF JG SHORT ntdll.7C90D7FA 7C90D7FB 12C2 ADC AL,DL 7C90D7FD 14 00 ADC AL,0 7C90D7FF 90 NOP ntdll.ZwQueryInformationToken 7C90D800 B8 9C000000 MOV EAX,9C So firstly, does anyone know what, if anything, would cause OpenGL calls to lock up indefinitely, and if there are any ways around it? And what would be creating such a hook in kernel memory? Update: After some more fiddling, I have discovered a few more kernel hooks; a lot of them are used to nullify data returned by system information calls (such as the remote debugging port). I also managed to find out that whatever is doing this is using madchook.dll (by madshi), and this DLL is also injected into every running process (this seems to be some anti-debugging code). Also, on the OpenGL side, it seems DirectX is fine/unaffected (I ran one of the DX 9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

    Read the article

  • How memset initializes an array of integers by -1?

    - by haccks
    The manpage says about memset: #include <string.h> void *memset(void *s, int c, size_t n) The memset() function fills the first n bytes of the memory area pointed to by s with the constant byte c. It is clear that memset can't be used to initialize an int array as shown below: int a[10]; memset(a, 1, sizeof(a)); This is because an int is represented by 4 bytes (say) and one cannot get the desired value for the integers in array a. But I often see programmers use memset to set the int array elements to either 0 or -1. int a[10]; int b[10]; memset(a, 0, sizeof(a)); memset(b, -1, sizeof(b)); As per my understanding, initializing with integer 0 is OK because 0 can be represented in 1 byte (maybe I am wrong in this context). But how is it possible to initialize b with -1 (a 4-byte value)?
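
    The short version: memset converts its value argument to an unsigned char and fills byte by byte, and an int whose bytes are all 0xFF is -1 in two's complement, so the per-byte fill happens to produce the intended per-int value for 0 and -1 (but not for 1). A small sketch, assuming a typical machine with 4-byte two's-complement ints:

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            int b[10];

            memset(b, -1, sizeof b);          /* every byte becomes 0xFF */
            printf("%d %d\n", b[0], b[9]);    /* prints: -1 -1 */

            memset(b, 1, sizeof b);           /* every byte becomes 0x01 */
            printf("%d\n", b[0]);             /* prints: 16843009 (0x01010101), not 1 */
            return 0;
        }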

    Read the article

  • Passing in a lambda to a Where statement

    - by sonicblis
    I noticed today that if I do this: var items = context.items.Where(i => i.Property < 2); items = items.Where(i => i.Property > 4); Once I access the items var, it executes only the first line as the data call and then does the second call in memory. However, if I do this: var items = context.items.Where(i => i.Property < 2).Where(i => i.Property > 4); I get only one expression executed against the context that includes both where statements. I have a host of variables that I want to use to build the expression for the LINQ lambda, but their presence or absence changes the expression such that I'd have to have a ridiculous number of conditionals to satisfy all cases. I thought I could just add the Where() statements as in my first example above, but that doesn't end up as a single expression that contains all of the criteria. Therefore, I'm trying to create just the lambda itself as such: //bogus syntax if (var1 == "something") var expression = Expression<Func<item, bool>>(i => i.Property == "Something); if (var2 == "somethingElse") expression = expression.Where(i => i.Property2 == "SomethingElse"); And then pass that into the Where of my context.Items to evaluate. A) Is this right, and B) if so, how do you do it?
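
    For reference, when the variable is kept as IQueryable<T> rather than IEnumerable<T>, chained Where calls keep composing the same expression tree, so conditional filters can be added one at a time and still execute as a single translated query. A sketch; the entity and property names follow the question and are otherwise illustrative:

        IQueryable<Item> query = context.Items;

        if (var1 == "something")
            query = query.Where(i => i.Property == "Something");

        if (var2 == "somethingElse")
            query = query.Where(i => i.Property2 == "SomethingElse");

        var results = query.ToList();   // one query containing every filter that was applied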

    Read the article

  • Associating an object with another object for GC clearup

    - by thecoop
    Is there any way of associating an object instance (object A) with a second object (object B) in a generalised way, so that when B gets collected A becomes eligible for collection? The same behaviour that would happen if B had an instance variable pointing to A, but without explicitly changing the class definition of B, and being able to do this in a dynamic way? The same sort of effect could be done by using the Component.Disposed event in a funky way, but I don't want to make B disposable. EDIT: I'm basically creating a cache of objects that are associated with a single 'root' object, and I don't want the cache to be static, as there can be lots of root objects using different caches, so lots of memory will be used up when a root object is no longer used but the cached objects are still around. So, I want a collection of cached objects to be associated with each 'root' object, without changing the root object definition. Sort of like metadata: an extra object reference attached to each root object instance. That way, each collection will get collected when the root object is collected, and not hang around like it would if a static cache was used.
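
    For reference, .NET 4 added exactly this mechanism in ConditionalWeakTable<TKey, TValue>: it attaches extra state to an instance without changing its class, and the attached value becomes collectable as soon as the key object does. A sketch; the type names are illustrative:

        using System.Runtime.CompilerServices;

        class RootObject { }
        class Cache { /* cached items for one root object */ }

        static class RootCaches
        {
            private static readonly ConditionalWeakTable<RootObject, Cache> caches =
                new ConditionalWeakTable<RootObject, Cache>();

            public static Cache For(RootObject root)
            {
                // creates the cache on first use, then returns the same instance for that root
                return caches.GetValue(root, _ => new Cache());
            }
        }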

    Read the article

  • Please critique this method

    - by Jakob
    Hi, I've been looking around the net for tab close-button functionality, but all those solutions had some complicated event handler, and I wanted to try and keep it simple. I might have broken good coding practice in doing so, so please review this method and tell me what is wrong. public void AddCloseItem(string header, object content){ //Create tabitem with header and content StackPanel headerPanel = new StackPanel() { Orientation = Orientation.Horizontal, Height = 14}; headerPanel.Children.Add(new TextBlock() { Text = header }); Button closeBtn = new Button() { Content = new Image() { Source = new BitmapImage(new Uri("images/cross.png", UriKind.Relative)) }, Margin = new Thickness() { Left = 10 } }; headerPanel.Children.Add(closeBtn); TabItem newTabItem = new TabItem() { Header = headerPanel, Content = content }; //Add close button functionality closeBtn.Tag = newTabItem; closeBtn.Click += new RoutedEventHandler(closeBtn_Click); //Add item to list this.Add(newTabItem); } void closeBtn_Click(object sender, RoutedEventArgs e) { this.Remove((TabItem)((Button)sender).Tag); } So what I'm doing is storing the TabItem in the button's Tag property, and then when the button is clicked I just remove the TabItem from my ObservableCollection, and the UI is updated appropriately. Am I using too much memory saving the TabItem to the Tag property?

    Read the article
