Search Results

Search found 12267 results on 491 pages for 'memory'.

  • In Seam what's the difference between injected EntityManager and getEntityManager from EntityHome

    - by Navi
    I am testing a Seam application using the Needle test API. In my code I am using the getEntityManager() method from EntityHome. When I run the unit tests against an in-memory database I get the following exception:

        java.lang.IllegalStateException: No application context active
            at org.jboss.seam.Component.forName(Component.java:1945)
            at org.jboss.seam.Component.getInstance(Component.java:2005)
            at org.jboss.seam.Component.getInstance(Component.java:1983)
            at org.jboss.seam.Component.getInstance(Component.java:1977)
            at org.jboss.seam.Component.getInstance(Component.java:1972)
            at org.jboss.seam.framework.Controller.getComponentInstance(Controller.java:272)
            at org.jboss.seam.framework.PersistenceController.getPersistenceContext(PersistenceController.java:20)
            at org.jboss.seam.framework.EntityHome.getEntityManager(EntityHome.java:177)
            etc.

    I can resolve some of these errors by injecting the EntityManager with @In EntityManager entityManager; unfortunately the persist method of EntityHome also calls getEntityManager(). This means a lot of mocks or rewriting the code somehow. Is there any workaround, and why is this exception thrown anyway? I am using Seam 2.2.0 GA, by the way. There is nothing special about the components; they are generated by seam-gen. The test is performed with an in-memory database - I followed the examples in http://jbosscc-needle.sourceforge.net/jbosscc-needle/1.0/db-util.html.

  • Java Collections and Garbage Collector

    - by Anth0
    A little question regarding performance in a Java web app. Let's assume I have a List<Rubrique> listRubriques with ten Rubrique objects. A Rubrique contains one list of products (List<Product> listProducts) and one list of clients (List<Client> listClients). What exactly happens in memory if I do this:

        listRubriques.clear();
        listRubriques = null;

    My view would be that, since listRubriques is empty, all the objects previously referenced by this list (including listProducts and listClients) will be garbage collected pretty soon. But since collections in Java are a little bit tricky, and since I have real performance issues with my app, I'm asking the question :)

    Edit: let's assume now that my Client object contains a List<Client>, so I have a kind of circular reference between my objects. What would happen then if listRubriques is set to null? This time, my view would be that my Client objects become "unreachable" and might create a memory leak?
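
    A minimal, hypothetical sketch of the reachability question above (the Client class and field names here are invented for illustration): circular references alone do not keep objects alive, because the collector traces reachability from GC roots rather than counting references, and System.gc() is only a hint, so the printed result is not strictly guaranteed on every run.

        import java.lang.ref.WeakReference;
        import java.util.ArrayList;
        import java.util.List;

        // Sketch only: Client here is a stand-in class, not the one from the question.
        public class ReachabilityDemo {
            static class Client {
                List<Client> friends = new ArrayList<Client>();  // circular references live here
            }

            public static void main(String[] args) throws InterruptedException {
                Client a = new Client();
                Client b = new Client();
                a.friends.add(b);
                b.friends.add(a);                                // a <-> b cycle

                List<Client> listRubriques = new ArrayList<Client>();
                listRubriques.add(a);
                listRubriques.add(b);

                WeakReference<Client> probe = new WeakReference<Client>(a);

                listRubriques.clear();
                listRubriques = null;                            // drop the last strong references
                a = null;
                b = null;

                System.gc();                                     // only a hint; collection is not guaranteed
                Thread.sleep(100);

                // Prints "collected: true" once the cycle has been reclaimed.
                System.out.println("collected: " + (probe.get() == null));
            }
        }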

  • Which technology is best suited to store and query a huge readonly graph?

    - by asmaier
    I have a huge directed graph: it consists of 1.6 million nodes and 30 million edges. I want users to be able to find all the shortest connections (including incoming and outgoing edges) between two nodes of the graph (via a web interface). At the moment I have stored the graph in a PostgreSQL database, but that solution is not very efficient or elegant: I basically need to store all the edges of the graph twice (see my question "PostgreSQL: How to optimize my database for storing and querying a huge graph").

    It was suggested to me to use a graph database like neo4j or AllegroGraph. However, the free version of AllegroGraph is limited to 50 million nodes and it also has a very high-level API (RDF), which seems too powerful and complex for my problem. Neo4j, on the other hand, has only a very low-level API (and the Python interface is not mature yet). Both of them seem better suited to problems where nodes and edges are frequently added to or removed from a graph. For a simple search on a graph, these graph databases seem too complex.

    One idea I had would be to "misuse" a search engine like Lucene for the job, since I'm basically only searching for connections in a graph. Another idea would be to have a server process storing the whole graph (500 MB to 1 GB) in memory. The clients could then query the server process and traverse the graph very quickly, since the graph is stored in memory. Is there an easy way to write such a server (preferably in Python) using some existing framework? Which technology would you use to store and query such a huge read-only graph?
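
    As a rough illustration of the in-memory approach mentioned above, here is a minimal Python sketch (load_edges() is assumed, not shown): adjacency lists of integer node ids plus a plain BFS, which returns one shortest path in an unweighted graph; enumerating all shortest paths, or following incoming edges as well, would need the search extended accordingly.

        from collections import deque

        # Sketch of the "whole graph in memory" idea.
        def build_adjacency(edges):
            adj = {}
            for src, dst in edges:
                adj.setdefault(src, []).append(dst)   # outgoing edges only
            return adj

        def shortest_path(adj, start, goal):
            prev = {start: None}
            queue = deque([start])
            while queue:
                node = queue.popleft()
                if node == goal:
                    path = []
                    while node is not None:
                        path.append(node)
                        node = prev[node]
                    return path[::-1]
                for nxt in adj.get(node, ()):
                    if nxt not in prev:
                        prev[nxt] = node
                        queue.append(nxt)
            return None

        # adj = build_adjacency(load_edges())   # load_edges() is assumed, not shown
        # print(shortest_path(adj, 42, 4711))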

  • Entity Framework query

    - by carter-boater
    Hi all, I have a piece of code that I don't know how to improve. I have two entities: EntityP and EntityC. EntityP is the parent of EntityC; it is a one-to-many relationship. EntityP has a property that depends on a property of all its attached EntityC. I need to load a list of EntityP with the property set correctly.

    So I first wrote a piece of code to get the EntityP list, called entityP_List. Then, as shown below, I loop through entityP_List and for each item query the database with an Any() call, which eventually gets translated to a "NOT EXISTS" SQL query. The reason I use this is that I don't want to load all the attached EntityC from the database into memory, because I only need the aggregate value of their property. But the problem is that the loop queries the database many times, once for each EntityP! So I am wondering if anybody can help me improve the code to query the database only once to get all the EntityP.IsAll_C_Complete values set, without loading EntityC into memory.

        foreach (EntityP p in entityP_List)
        {
            isAnyNotComoplete = entities.entityC.Any(c => c.IsComplete == false && c.parent.ID == p.ID);
            p.IsAll_C_Complete = !isAnyNotComoplete;
        }

    Thank you very much!

  • Convert function to read from string instead of file in C

    - by Dusty
    I've been tasked with updating a function which currently reads a configuration file from disk and populates a structure:

        static int LoadFromFile(FILE *Stream, ConfigStructure *cs)
        {
            int tempInt;
            ...
            if ( fscanf( Stream, "Version: %d\n", &tempInt) != 1 )
            {
                printf("Unable to read version number\n");
                return 0;
            }
            cs->Version = tempInt;
            ...
        }

    to one which allows us to bypass writing the configuration to disk and instead pass it directly in memory, roughly equivalent to this:

        static int LoadFromString(char *Stream, ConfigStructure *cs)

    A few things to note:

    - The current LoadFromFile function is incredibly dense and complex, reading dozens of versions of the config file in a backward compatible manner, which makes duplicating the overall logic quite a pain.
    - The functions that generate the config file and those that read it originate in totally different parts of the old system and therefore don't share any data structures, so I can't pass those directly. I could potentially write a wrapper, but again, it would need to handle any structure passed in in a backwards compatible manner.
    - I'm tempted to just pass the file as-is in as a string (as in the prototype above) and convert all the fscanf's to sscanf's, but then I have to handle incrementing the pointer along (and potentially dealing with buffer overrun errors) manually.
    - This has to remain in C, so no C++ functionality like streams can help here.

    Am I missing a better option? Is there some way to create a FILE * that actually just points to a location in memory instead of on disk? Any pointers, suggestions or other help is greatly appreciated.
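
    One possibility worth noting for the last question: fmemopen() wraps an in-memory buffer in a FILE*. It is a POSIX (glibc/newlib) function rather than part of standard C, so whether it is usable depends on the platform. A minimal sketch, with an invented buffer standing in for the real configuration data:

        #define _POSIX_C_SOURCE 200809L   /* fmemopen() is POSIX, not standard C */
        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *config = "Version: 7\n";

            /* Wrap the in-memory buffer in a FILE*, so the existing
             * fscanf-based parsing code could be reused unchanged. */
            FILE *stream = fmemopen((void *)config, strlen(config), "r");
            if (stream == NULL)
                return 1;

            int tempInt;
            if (fscanf(stream, "Version: %d\n", &tempInt) == 1)
                printf("read version %d from memory\n", tempInt);

            fclose(stream);
            return 0;
        }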

  • Why is it a bad practice to call System.gc?

    - by zneak
    After answering a question about how to force-free objects in Java (the guy was clearing a 1.5 GB HashMap) with System.gc(), I was told it's bad practice to call System.gc() manually, but the comments seemed mixed about it - so much so that no one dared to upvote or downvote my answer. I've been told it's a bad practice, but then I've also been told that garbage collector runs don't systematically stop the world anymore, and that the call can also be seen as only a hint, so I'm kind of at a loss.

    I do understand that usually the JVM knows better than you when it needs to reclaim memory. I also understand that worrying about a few kilobytes of data is silly, and that even megabytes of data isn't what it was a few years back. But still, 1.5 gigabytes? And you know there's about 1.5 GB of data hanging around in memory; it's not like it's a shot in the dark. Is System.gc() systematically bad, or is there some point at which it becomes okay?

    So the question is actually twofold:

    - Why is it, or is it not, a bad practice to call System.gc()? Is it really just a hint under certain implementations, or is it always a full collection cycle? Are there really garbage collector implementations that can do their work without stopping the world? Please shed some light on the various assertions people have made.
    - Where's the threshold? Is it never a good idea to call System.gc(), or are there times when it's acceptable? If any, what are those times?

  • Why is my multithreaded Java program not maxing out all my cores on my machine?

    - by James B
    Hi, I have a program that starts up, creates an in-memory data model, and then creates a (command-line-specified) number of threads to run several string-checking algorithms against an input set and that data model. The work is divided amongst the threads along the input set of strings, and each thread then iterates over the same in-memory data model instance (which is never updated again, so there are no synchronization issues).

    I'm running this on a Windows 2003 64-bit server with 2 quad-core processors, and looking at Windows Task Manager the cores aren't being maxed out (nor do they look like they are being particularly taxed) when I run with 10 threads. Is this normal behaviour? It appears that 7 threads all complete a similar amount of work in a similar amount of time, so would you recommend running with 7 threads instead? Should I run it with more threads? I assume this could be detrimental, as the JVM will do more context switching between the threads. Alternatively, should I run it with fewer threads? Alternatively, what would be the best tool I could use to measure this? Would a profiling tool help me out here - indeed, is one of the several profilers better at detecting bottlenecks (assuming I have one here) than the rest?

    Note, the server is also running SQL Server 2005 (this may or may not be relevant), but nothing much is happening on that database when I am running my program. Note also, the threads are only doing string matching; they aren't doing any I/O or database work or anything else they may need to wait on. Thanks in advance, -James
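
    For reference, a minimal sketch of the usual way to split CPU-bound work like this over a fixed pool sized from the hardware (checkString() and the model object are invented stand-ins, and this does not by itself explain why existing threads fail to saturate the cores - that normally needs a profiler or thread dumps):

        import java.util.ArrayList;
        import java.util.Arrays;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        // Sketch only: checkString() and the data model are stand-ins for the real work.
        public class MatcherSketch {
            static int checkString(String s, Object model) { return s.length(); }

            public static void main(String[] args) throws Exception {
                final Object model = new Object();               // shared, read-only data model
                List<String> inputs = Arrays.asList("alpha", "beta", "gamma");

                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);

                List<Future<Integer>> results = new ArrayList<Future<Integer>>();
                for (final String s : inputs) {
                    Callable<Integer> task = new Callable<Integer>() {
                        public Integer call() { return checkString(s, model); }
                    };
                    results.add(pool.submit(task));              // one task per input string
                }
                for (Future<Integer> f : results) {
                    System.out.println(f.get());                 // wait for and collect each result
                }
                pool.shutdown();
            }
        }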

  • Backwards compatibility when using Core Data

    - by Alex
    Could anybody shed some light on why my app is crashing with the following error on iPhone OS 2.2.1?

        dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate
          Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName
          Expected in: /System/Library/Frameworks/Foundation.framework/Foundation

    I have weak-linked CoreData.framework, and have the Base SDK set to 3.0 and the Deployment Target set to SDK 2.2. The app already uses other 3.0 features when available and I did not have any problems with those, but apparently the backward-compatibility techniques used for the other features do not work with Core Data. The app crashes before the app delegate's applicationDidFinishLaunching gets called. Here's the debugger log:

        [Session started at 2010-05-25 20:17:03 -0400.]
        GNU gdb 6.3.50-20050815 (Apple version gdb-1119) (Thu May 14 05:35:37 UTC 2009)
        Copyright 2004 Free Software Foundation, Inc.
        GDB is free software, covered by the GNU General Public License, and you are
        welcome to change it and/or distribute copies of it under certain conditions.
        Type "show copying" to see the conditions.
        There is absolutely no warranty for GDB. Type "show warranty" for details.
        This GDB was configured as "--host=i386-apple-darwin --target=arm-apple-darwin".
        tty /dev/ttys001
        Loading program into debugger…
        sharedlibrary apply-load-rules all
        warning: Unable to read symbols from "MessageUI" (not yet mapped into memory).
        warning: Unable to read symbols from "CoreData" (not yet mapped into memory).
        Program loaded.
        target remote-mobile /tmp/.XcodeGDBRemote-12038-42
        Switching to remote-macosx protocol
        mem 0x1000 0x3fffffff cache
        mem 0x40000000 0xffffffff none
        mem 0x00000000 0x0fff none
        run
        Running…
        [Switching to thread 10755]
        [Switching to thread 10755]
        Re-enabling shared library breakpoint 1
        Re-enabling shared library breakpoint 2
        Re-enabling shared library breakpoint 3
        Re-enabling shared library breakpoint 4
        Re-enabling shared library breakpoint 5
        (gdb) continue
        warning: Unable to read symbols for ""/Users/alex/iPhone Projects/AppName/build/Debug-iphoneos"/AppName.app/AppName" (file not found).
        dyld: Symbol not found: _OBJC_CLASS_$_NSPredicate
          Referenced from: /var/mobile/Applications/456F243F-468A-4969-9BB7-A4DF993AE89C/AppName.app/AppName
          Expected in: /System/Library/Frameworks/Foundation.framework/Foundation
        (gdb)

  • JavaScript working in Chrome but not in Explorer

    - by Greg
    Hello, I am writing this code in HTML:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <script language="javascript" type="text/javascript">
                function setVisibility(id, visibility) {
                    document.getElementById(id).style.display = visibility;
                }
            </script>
            <title>Welcome to the memory game</title>
        </head>
        <body>
            <h1>Welcome to the memory game!</h1>
            <input type="button" name="type" value='Show Layer' onclick="setVisibility('sub3', 'inline');"/>
            <input type="button" name="type" value='Hide Layer' onclick="setVisibility('sub3', 'none');"/>
            <div id="sub3">Message Box</div>
        </body>
        </html>

    It is supposed to make the div disappear and reappear, and it works in Chrome but not in Internet Explorer. Does anyone have any idea how I can make it work in Explorer (I tried allowing blocked content when the message about ActiveX appears)? Thanks, Greg

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!) but priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode cache or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found there, it is then searched for in the secondary slow cache, i.e. PHP includes. Are the opcode caches actually faster than doing a require_once of a PHP file containing a var_export'd array of data? My tests are inconclusive, as my development box (5.3 on XAMPP) keeps throwing errors installing any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place; however, this is not the question. Because a page script can include up to 50-60 helper files, I have a feeling that if the site was under pressure it would buckle because of all the I/O this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried to do is join all the helper files required for a script's execution into one single file. This is achievable and works well; however, it has the side effect of increasing memory usage dramatically, even though technically the same code is being used. What are your thoughts and opinions on this?

  • What specific features of LabView are frustrating to you?

    - by Underflow
    Please bear with me: this isn't a language debate or a flame. It's a real request for opinions. Occasionally, I have to help educate a traditional text coder in how to think in LabVIEW (LV). Often during this process, I get to hear about how LV sucks. Rarely is this insight accompanied by rational observations other than "Language X is just so much better!". While this statement is satisfying to them, it doesn't help me understand what is frustrating them. So, for those of you with LabVIEW and text-language experience, what specific things about LV drive you nuts?

    ------ Summaries -------

    Thanks for all the answers! Some of the issues are answered in the comments below, some exist on other sites, and some are just genuine problems with LV. In the spirit of the original question, I'm not going to try to answer all of these here: check LAVA [1] or NI's website, and you'll be pleasantly surprised at how many of these things can be overcome.

    - Unintentional concurrency
    - No access to traditional text manipulation tools
    - Binary-only source code control
    - Difficult to branch and merge
    - Too many open windows
    - Text has cleaner/clearer/more expressive syntax
    - Clean coding requires a lot of time and manipulation
    - Large, difficult to access API/palette system
    - Mouse required
    - File namespacing: no duplicate files with the same name in memory
    - LV objects are natively by-value only
    - Requires dev environment to view code
    - Lack of zoom
    - Slow startup
    - Memory pig
    - "Giant" code is difficult to work with
    - UI lockup is easy to do
    - Trackpads and LV don't mix well
    - String manipulation is graphically bloated
    - Limited UI customization
    - "Hidden" primitives (yes, these exist)
    - Lack of official metaprogramming capability (not for much longer, though)
    - Lack of unicode support

    [1]: http://www.lavag.org LAVA

  • asynchronous pages

    - by lockedscope
    I have just read the multi-threading and custom threading in ASP.NET articles:

    http://www.williablog.net/williablog/post/2008/12/16/Custom-Threading-in-ASPNET.aspx
    http://www.williablog.net/williablog/post/2008/12/16/Multi-Threading-in-ASPNET.aspx

    I have a couple of questions.

    - What does he mean by returning a thread to the pool? Is that thread completely removed from memory, or put into a state where it does not get scheduled on the CPU (is it in a sleep state or whatever)? If the thread is removed from memory, how could it survive after the async point? How does this mechanism work? Are all the objects (page class, request, response, etc.) copied somewhere else before they are disposed? (Or is the thread just waiting in a sleep state and then woken when the async call ends?)
    - He says: "Having said that, making pages asynchronous is not really about improving performance, it is about improving scalability", and then he says: "I'm sorry to say that it will do nothing for scalability or performance." So which one is true? Or for which case(s) is each true?

  • Proper use of the IDisposable interface

    - by cwick
    I know from reading the MSDN documentation (http://msdn.microsoft.com/en-us/library/system.idisposable.aspx) that the "primary" use of the IDisposable interface is to clean up unmanaged resources. To me, "unmanaged" means things like database connections, sockets, window handles, etc. But I've seen code where the Dispose method is implemented to free managed resources, which seems redundant to me, since the garbage collector should take care of that for you. For example:

        public class MyCollection : IDisposable
        {
            private List<String> _theList = new List<String>();
            private Dictionary<String, Point> _theDict = new Dictionary<String, Point>();

            // Die, you gravy sucking pig dog!
            public void Dispose()
            {
                _theList.Clear();
                _theDict.Clear();
                _theList = null;
                _theDict = null;
            }
        }

    My question is, does this make the garbage collector free the memory used by MyCollection any faster than it normally would?

    Edit: So far people have posted some good examples of using IDisposable to clean up unmanaged resources such as database connections and bitmaps. But suppose that _theList in the above code contained a million strings, and you wanted to free that memory now, rather than waiting for the garbage collector. Would the above code accomplish that?

  • How to pass binary data between two apps using Content Provider?

    - by Viktor
    I need to pass some binary data between two Android apps using a Content Provider (sharedUserId is not an option). I would prefer not to pass the data (a savegame stored as a file, small in size, < 20k) as a file (i.e. by overriding openFile()), since this would necessitate some complicated temp-file scheme to cope with concurrent content provider accesses and a running game. I would like to read the file into memory under a mutex lock and then pass the binary array in the simplest way possible. How do I do this?

    It seems creating a file in memory is not a possibility due to the return type of openFile(), and query() needs to return a Cursor. Using MatrixCursor is not possible since it applies toString() to all stored objects when reading it. What do I need to do? Implement a custom Cursor? This class has 30 abstract methods. Do I read the file, put it in an SQLite db and return the cursor? The complexity of this seemingly simple task is mind-boggling.

  • Automation Error when exporting Excel data to SQL Server

    - by brohjoe
    I'm getting an Automation error upon running VBA code in Excel 2007. I'm attempting to connect to a remote SQL Server DB and load data from Excel to SQL Server. The error I get is: "Run-time error '-2147217843(80040e4d)': Automation error". I checked the MSDN site and it suggested that this may be due to a bug associated with the sqloledb provider, and that one way to mitigate it is to use ODBC. Well, I changed the connection string to reflect the ODBC provider and associated parameters and I'm still getting the same error. Here is the code with ODBC as the provider:

        Dim cnt As ADODB.Connection
        Dim rst As ADODB.Recordset
        Dim stSQL As String
        Dim wbBook As Workbook
        Dim wsSheet As Worksheet
        Dim rnStart As Range

        Public Sub loadData()
            'This was set up using Microsoft ActiveX Data Components version 6.0.
            'Create ADODB connection object, open connection and construct the connection string object.
            Set cnt = New ADODB.Connection
            cnt.ConnectionString = "Driver={SQL Server}; Server=onlineSQLServer2010.foo.com; Database=fooDB;Uid=logonalready;Pwd='helpmeOB1';"
            cnt.Open

            On Error GoTo ErrorHandler
            'Open Excel and run query to export data to SQL Server.
            strSQL = "SELECT * INTO SalesOrders FROM OPENDATASOURCE('Microsoft.ACE.OLEDB.12.0'," & _
                "'Data Source=C:\Database.xlsx; Extended Properties=Excel 12.0')...[SalesOrders$]"
            cnt.Execute (strSQL)

            'Error handling.
        ErrorExit:
            'Reclaim memory from the connection objects
            Set rst = Nothing
            Set cnt = Nothing
            Exit Sub

        ErrorHandler:
            MsgBox Err.Description, vbCritical
            Resume ErrorExit

            'clean up and reclaim memory resources.
            If CBool(cnt.State And adStateOpen) Then
                Set rst = Nothing
                Set cnt = Nothing
                cnt.Close
            End If
        End Sub

  • EPIPE blocks server

    - by timn
    I have written a single-threaded asynchronous server in C running on Linux: the socket is non-blocking and, for polling, I am using epoll. Benchmarks show that the server performs fine and, according to Valgrind, there are no memory leaks or other problems.

    The only problem is that when a write() call is interrupted (because the client closed the connection), the server receives a SIGPIPE. I am causing the interruption artificially by running the benchmarking utility "siege" with the parameter -b. It makes lots of requests in a row, which all work perfectly. Now I press CTRL-C and restart siege. Sometimes I am "lucky" and the server does not manage to send the full response because the client's fd is invalid. As expected, errno is set to EPIPE. I handle this situation, call close() on the fd and then free the memory related to the connection. Now the problem is that the server blocks and does not answer properly anymore. Here is the strace output:

        accept(3, {sa_family=AF_INET, sin_port=htons(50611), sin_addr=inet_addr("127.0.0.1")}, [16]) = 5
        fcntl64(5, F_GETFD)                      = 0
        fcntl64(5, F_SETFL, O_RDONLY|O_NONBLOCK) = 0
        epoll_ctl(4, EPOLL_CTL_ADD, 5, {EPOLLIN|EPOLLERR|EPOLLHUP|EPOLLET, {u32=158310248, u64=158310248}}) = 0
        epoll_wait(4, {{EPOLLIN, {u32=158310248, u64=158310248}}}, 128, -1) = 1
        read(5, "GET /user/register HTTP/1.1\r\nHos"..., 4096) = 161
        write(5, "HTTP/1.1 200 OK\r\nContent-Type: t"..., 106) = 106          <<<<<
        write(5, "00001000\r\n", 10) = -1 EPIPE (Broken pipe)                  <<<<< Why did the previous write() work fine but not this one?
        --- SIGPIPE (Broken pipe) @ 0 (0) ---

    As you can see, the client establishes a new connection, which is accepted and added to the epoll queue. epoll_wait() signals that the client sent data (EPOLLIN). The request is parsed and a response is composed. Sending the headers works fine, but when it comes to the body, write() results in an EPIPE. It is not a bug in siege, because afterwards the server blocks on incoming connections from any client.
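
    One detail worth keeping in mind with this symptom: the default disposition of SIGPIPE terminates the process, so servers that want to see EPIPE from write() normally either ignore the signal once at startup or, on Linux, use send() with MSG_NOSIGNAL per call. A minimal sketch under that assumption (write_reply() is an invented helper name):

        #include <errno.h>
        #include <signal.h>
        #include <sys/socket.h>
        #include <sys/types.h>

        /* Per-call alternative (Linux-specific): send() with MSG_NOSIGNAL never
         * raises SIGPIPE; a broken connection just returns -1 with errno == EPIPE. */
        static ssize_t write_reply(int fd, const void *buf, size_t len)
        {
            ssize_t n = send(fd, buf, len, MSG_NOSIGNAL);
            if (n < 0 && errno == EPIPE) {
                /* peer is gone: close fd and free the connection state here */
            }
            return n;
        }

        int main(void)
        {
            signal(SIGPIPE, SIG_IGN);   /* one-time setup: EPIPE is reported, no signal delivered */

            /* ... set up the listening socket and the epoll loop, use write_reply() ... */
            (void)write_reply;
            return 0;
        }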

  • Jmap can't connect to make a dump

    - by Jasper Floor
    We have an open beta of an app which occasionally causes the heap space to overflow. The JVM reacts by going on a permanent vacation. To analyze this I would like to peek into the memory at the point where it failed. Java does not want me to do this. The process is still in memory but it doesn't seem to be recognized as a Java process. The server in question is a Debian Lenny server, Java 6u14.

        /opt/jdk/bin# ./jmap -F -dump:format=b,file=/tmp/apidump.hprof 11175
        Attaching to process ID 11175, please wait...
        sun.jvm.hotspot.debugger.NoSuchSymbolException: Could not find symbol "gHotSpotVMTypeEntryTypeNameOffset" in any of the known library names (libjvm.so, libjvm_g.so, gamma_g)
            at sun.jvm.hotspot.HotSpotTypeDataBase.lookupInProcess(HotSpotTypeDataBase.java:390)
            at sun.jvm.hotspot.HotSpotTypeDataBase.getLongValueFromProcess(HotSpotTypeDataBase.java:371)
            at sun.jvm.hotspot.HotSpotTypeDataBase.readVMTypes(HotSpotTypeDataBase.java:102)
            at sun.jvm.hotspot.HotSpotTypeDataBase.<init>(HotSpotTypeDataBase.java:85)
            at sun.jvm.hotspot.bugspot.BugSpotAgent.setupVM(BugSpotAgent.java:568)
            at sun.jvm.hotspot.bugspot.BugSpotAgent.go(BugSpotAgent.java:494)
            at sun.jvm.hotspot.bugspot.BugSpotAgent.attach(BugSpotAgent.java:332)
            at sun.jvm.hotspot.tools.Tool.start(Tool.java:163)
            at sun.jvm.hotspot.tools.HeapDumper.main(HeapDumper.java:77)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at sun.tools.jmap.JMap.runTool(JMap.java:179)
            at sun.tools.jmap.JMap.main(JMap.java:110)
        Debugger attached successfully.
        sun.jvm.hotspot.tools.HeapDumper requires a java VM process/core!

  • Python SQLite FTS3 alternatives?

    - by Mike Cialowicz
    Are there any good alternatives to SQLite + FTS3 for Python? I'm iterating over a series of text documents and would like to categorize them according to some text queries. For example, I might want to know if a document mentions the words "rating" or "upgraded" within three words of "buy". The FTS3 syntax for this query is the following:

        (rating OR upgraded) NEAR/3 buy

    That's all well and good, but if I use FTS3, this operation seems rather expensive. The process goes something like this:

        # create an SQLite3 db in memory
        conn = sqlite3.connect(':memory:')
        c = conn.cursor()
        c.execute('CREATE VIRTUAL TABLE fts USING FTS3(content TEXT)')
        conn.commit()

    Then, for each document, do something like this:

        # insert the document text into the fts table, so I can run a query
        c.execute('insert into fts(content) values (?)', (content,))
        conn.commit()
        # execute my FTS query here, look at the results, etc.
        # remove the document text from the fts table before working on the next document
        c.execute('delete from fts')
        conn.commit()

    This seems rather expensive to me. The other problem I have with SQLite FTS is that it doesn't appear to work with Python 2.5.4: the 'CREATE VIRTUAL TABLE' syntax is unrecognized. This means that I'd have to upgrade to Python 2.6, which means re-testing numerous existing scripts and programs to make sure they work under 2.6. Is there a better way? Perhaps a different library? Something faster? Thank you.

  • php scandir --> search for files/directories

    - by Peter
    Hi! I searched before asking, without luck. I'm looking for a simple script for myself with which I can search for files/folders. I found this code snippet in the PHP manual (I think I need this), but it does not work for me. "Was looking for a simple way to search for a file/directory using a mask. Here is such a function. By default, this function will keep in memory the scandir() result, to avoid scanning multiple times for the same directory."

        <?php
        function sdir( $path='.', $mask='*', $nocache=0 ){
            static $dir = array();    // cache result in memory
            $sdir = array();          // added: avoids returning an undefined variable when nothing matches
            if ( !isset($dir[$path]) || $nocache) {
                $dir[$path] = scandir($path);
            }
            foreach ($dir[$path] as $i=>$entry) {
                if ($entry!='.' && $entry!='..' && fnmatch($mask, $entry) ) {
                    $sdir[] = $entry;
                }
            }
            return ($sdir);
        }
        ?>

    Thank you for any help, Peter

  • using indexer to retrieve Linq to SQL object from datastore

    - by fearofawhackplanet
        class UserDatastore : IUserDatastore
        {
            ...
            public IUser this[Guid userId]
            {
                get
                {
                    User user = (from u in _dataContext.Users
                                 where u.Id == userId
                                 select u).FirstOrDefault();
                    return user;
                }
            }
            ...
        }

    One of the developers in our team is arguing that an indexer in the above situation is not appropriate and that a GetUser(Guid id) method should be preferred. The arguments being that:

    1) We aren't indexing into an in-memory collection; the indexer is basically performing a hidden SQL query
    2) Using a Guid in an indexer is bad (FxCop flagged this also)
    3) Returning null from an indexer isn't normal behaviour
    4) An API user generally wouldn't expect any of this behaviour

    I agree to an extent with (most of) these points. But I'm also inclined to argue that one of the characteristics of LINQ is to abstract database access so that it appears you're simply working with a bunch of collections, even though the lazy evaluation paradigm means those collections aren't evaluated until you run a query over them. It doesn't seem inconsistent to me to access the datastore in the same manner as if it were a concrete in-memory collection.

    Also, bearing in mind this is an inherited codebase which uses this pattern extensively and consistently, is it worth refactoring? I accept that it might have been better to use a Get method from the start, but I'm not yet convinced that it's completely incorrect to be using an indexer. I'd be interested to hear all opinions, thanks.

  • Help decrypting in ColdFusion passwords created in .NET

    - by KnightStalker
    I have a SQL db storing passwords that were encrypted by a .NET application, and I need to decrypt them in a ColdFusion app. I just can't seem to get things set up properly for the CF decryption to work. Any help would be appreciated. Thanks. The .NET decryption code is:

        public string Decrypt(string input)
        {
            try
            {
                DESCryptoServiceProvider des = new DESCryptoServiceProvider();
                int ZeroBasedByteCount = (input.Length / 2);

                //Put the input string into the byte array
                byte[] inputByteArray = new byte[ZeroBasedByteCount];
                int i;
                int x;
                for (x = 0; x < ZeroBasedByteCount; x++)
                {
                    i = (Convert.ToInt32(input.Substring(x * 2, 2), 16));
                    inputByteArray[x] = (byte)i;
                }

                //Create the crypto objects
                des.Key = ASCIIEncoding.ASCII.GetBytes(key);
                des.IV = ASCIIEncoding.ASCII.GetBytes(key);
                MemoryStream ms = new MemoryStream();
                CryptoStream cs = new CryptoStream(ms, des.CreateDecryptor(), CryptoStreamMode.Write);

                //Flush the data through the crypto stream into the memory stream
                cs.Write(inputByteArray, 0, inputByteArray.Length);
                cs.FlushFinalBlock();

                //Get the decrypted data back from the memory stream
                StringBuilder ret = new StringBuilder();
                foreach (byte b in ms.ToArray())
                {
                    ret.Append((char)b);
                }
                return ret.ToString();
            }
            catch (Exception ex)
            {
                throw (ex);
            }
        }

  • x86_64 printf segfault after brk call

    - by gmb11
    While I was trying to use brk (int 0x80 with 45 in %rax) to implement a simple memory manager in assembly and print the blocks in order, I kept getting a segfault. After a while I could only reproduce the error, but I have no idea why it is happening:

        .section .data
        helloworld:
            .ascii "hello world"

        .section .text
        .globl _start
        _start:
            push %rbp
            mov %rsp, %rbp

            movq $45, %rax
            movq $0, %rbx       # brk(0) should just return the current break of the program
            int $0x80

            #incq %rax          # segfault
            #addq $1, %rax      # segfault
            movq $0, %rax       # works fine?
            #addq $1, %rax      # segfault again?

            movq $helloworld, %rdi
            call printf

            movq $1, %rax       # exit
            int $0x80

    In the example here, if the commented lines are uncommented, I get a segfault, but some instructions (like the movq $0, %rax) work just fine. In my other program, the first couple of printf calls work, but the third crashes... Looking at other questions, I read that printf sometimes allocates memory, and that brk shouldn't be used because in this case it corrupts the heap or something... I'm very confused, does anyone know something about that?

    EDIT: I've just found out that for printf to work you need %rax=0.
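
    The EDIT matches the x86-64 System V calling convention: for a variadic function like printf, %al must hold the number of vector (SSE) registers used for the arguments, so leaving a syscall return value in %rax before the call can crash inside printf. Also note that int $0x80 invokes the 32-bit syscall interface; the native 64-bit entry point is the syscall instruction (where brk is 12 and exit is 60). A minimal sketch of the call site, assuming the program is linked against libc as in the question (the .asciz termination is an added assumption about the intent):

                .section .data
        helloworld:
                .asciz "hello world\n"      # .asciz adds the NUL terminator printf expects

                .section .text
                .globl _start
        _start:
                movq $helloworld, %rdi      # first integer argument: the format string
                xorl %eax, %eax             # %al = 0: this variadic call uses no vector registers
                call printf

                movq $60, %rax              # exit(0) via the native 64-bit syscall interface
                xorq %rdi, %rdi
                syscall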

  • Is there a disassembler + debugger for java (ala OllyDbg / SoftICE for assembler)?

    - by Ran Biron
    Is there a utility similar to OllyDbg / SoftICE for Java? I.e. execute a class (from a jar / with a class path) and, without source code, show the disassembly of the intermediate code, with the ability to step through / step over / search for references / edit specific intermediate code in memory / apply edits to the file... If not, is it even possible to write something like this (assuming we're willing to live without HotSpot for the debug duration)?

    Edit: I'm not talking about JAD or JD or Cavaj. These are fine decompilers, but I don't want a decompiler for several reasons, the most notable being that their output is incorrect (at best, sometimes just plain wrong). I'm not looking for a magical "compiled bytes to Java code" tool - I want to see the actual bytes that are about to be executed. Also, I'd like the ability to change those bytes (just like in an assembly debugger) and, hopefully, write the changed part back to the class file.

    Edit2: I know javap exists - but it only goes one way (and without any sort of analysis). Example (code taken from the vmspec documentation): from Java code, we use javac to compile this:

        void setIt(int value) {
            i = value;
        }

        int getIt() {
            return i;
        }

    to a Java .class file. Using javap -c I can get this output:

        Method void setIt(int)
           0 aload_0
           1 iload_1
           2 putfield #4
           5 return

        Method int getIt()
           0 aload_0
           1 getfield #4
           4 ireturn

    This is OK for the disassembly part (not really good without analysis - "field #4 is Example.i"), but I can't find the two other "tools":

    - A debugger that goes over the instructions themselves (with stack, memory dumps, etc.), allowing me to examine the actual code and environment.
    - A way to reverse the process - edit the disassembled code and recreate the .class file (with the edited code).

  • DDK/WDM developing problem ... driver won't load on x64 windows platform

    - by user295975
    Hi there! I am a beginner in the DDK/WDM driver development field. I have a task which involves porting a virtual device driver from x86 to x64 (Intel). I got the source code, modified it a bit and compiled it successfully with the DDK (build environments), but when I tried to load it on an ia64 Windows 7 machine it didn't want to load. Then I tried some simple device driver examples from http://www.codeproject.com/KB/system/driverdev.aspx and from other links, but still the same problem. I heard on a forum that some of the libraries you link against are not compatible with the new machines, and it was suggested to link against other, similar libraries... but that still didn't work.

    When I build, I use the "-cefw" command line parameters as suggested. I do not have an *.inf file associated with the driver, but I'm copying it into system32/drivers and using WinObj to see whether it is loaded into memory after the next restart. I also tried this program (http://www.codeproject.com/KB/system/tdriver.aspx) to load the driver into memory, but it still didn't work for me. Please help me... I'm stuck on this and my deadline has already passed. I feel like I'm driving myself nuts trying to discover what I am doing wrong.

  • Linking the Linker script file to source code

    - by user304097
    Hello, I am new to the GNU toolchain. I have a C source file containing some structures and variables, and I need to place certain variables at particular locations. So I have written a linker script and used __attribute__((section("NAME"))) on the variable declarations in the C source. I am using GCC (under Cygwin) to compile the source and creating a .hex file with objcopy, but I can't work out how to pass my linker script at link time so the variables are relocated accordingly. I am attaching the linker script and the C source for reference. Please help me link the linker script into the build of the .hex file.

    Linker script:

        /* defining memory regions */
        MEMORY
        {
            base_table_ram : org = 0x00700000, len = 0x00000100  /* base table area for BASE table */
            mem2           : org = 0x00800200, len = 0x00000300  /* other structure variables */
        }

        /* Sections directive definitions */
        SECTIONS
        {
            BASE_TABLE : { } > base_table_ram
            GROUP :
            {
                .text : { } { *(SEG_HEADER) }
                .data : { } { *(SEG_HEADER) }
                .bss  : { } { *(SEG_HEADER) }
            } > mem2
        }

    C source code:

        const UINT8 un8_Offset_1 __attribute__((section("BASE_TABLE"))) = 0x1A;
        const UINT8 un8_Offset_2 __attribute__((section("BASE_TABLE"))) = 0x2A;
        const UINT8 un8_Offset_3 __attribute__((section("BASE_TABLE"))) = 0x3A;
        const UINT8 un8_Offset_4 __attribute__((section("BASE_TABLE"))) = 0x4A;
        const UINT8 un8_Offset_5 __attribute__((section("BASE_TABLE"))) = 0x5A;
        const UINT8 un8_Offset_6 __attribute__((section("SEG_HEADER"))) = 0x6A;

    My intention is to place the variables of section "BASE_TABLE" at the address defined in the linker script and the remaining variables at the "SEG_HEADER" location defined there. But after compilation, when I look into the .hex file, the different sections' variables end up in different hex records located at address 0x00, not the one given in the linker script. Please help me link the linker script into the build. Are there any command line options to pass the linker script? If so, please provide info on how to use them. Thanks in advance, SureshDN.
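
    For reference, a minimal sketch of how a custom linker script is normally passed to the GNU tools: ld takes it with -T (or the gcc driver forwards it with -Wl,-T), and objcopy then converts the linked ELF into an Intel HEX file. The file names below are assumptions, not from the question:

        # assuming the script is saved as custom.ld and the C file as base_table.c
        gcc -c base_table.c -o base_table.o
        ld -T custom.ld base_table.o -o base_table.elf        # -T makes ld use the custom linker script
        # or let the gcc driver run the link step instead of invoking ld directly:
        #   gcc -nostartfiles -Wl,-T,custom.ld base_table.o -o base_table.elf
        objcopy -O ihex base_table.elf base_table.hex         # emit Intel HEX records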
