Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • ways to avoid global temp tables in oracle

    - by Omnipresent
    We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server procedures were highly dependent on session tables (INSERT INTO #table1 ...), and those tables were converted to global temporary tables (GTTs) in Oracle. We ended up with around 500 GTTs for our 400 procedures. Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues. What other alternatives are there? Collections? Cursors? Our typical use of GTTs looks like this.

    Insert into the GTT:

        INSERT INTO some_gtt_1 (column_a, column_b, column_c)
        (SELECT someA, someB, someC
           FROM TABLE_A
          WHERE condition_1 = 'YN756'
            AND type_cd = 'P'
            AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
            AND (lname LIKE (v_LnameUpper || '%') OR lname LIKE (v_searchLnameLower || '%'))
            AND (e_flag = 'Y' OR it_flag = 'Y' OR fit_flag = 'Y'));

    Update the GTT:

        UPDATE some_gtt_1 a
           SET column_a = (SELECT b.data_a
                             FROM some_table_b b
                            WHERE a.column_b = b.data_b
                              AND a.column_c = 'C')
         WHERE column_a IS NULL OR column_a = ' ';

    Later on we read the data back out of the GTT. These are just sample queries; in reality the queries are really complex, with lots of joins and subqueries. I have a three-part question: Can someone show how to transform the above sample queries to collections and/or cursors? Since GTTs let you work natively in SQL, why move away from them; are they really that bad? What should the guidelines be on when to use and when to avoid GTTs?
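
    One common alternative is to pull the rows into a PL/SQL collection with BULK COLLECT and do the "update" step procedurally in memory. The sketch below is illustrative only: it reuses the table and column names from the question and assumes the working set fits comfortably in PGA memory.

        DECLARE
          TYPE t_row IS RECORD (
            column_a TABLE_A.someA%TYPE,
            column_b TABLE_A.someB%TYPE,
            column_c TABLE_A.someC%TYPE
          );
          TYPE t_rows IS TABLE OF t_row;
          l_rows t_rows;
        BEGIN
          SELECT someA, someB, someC
            BULK COLLECT INTO l_rows
            FROM TABLE_A
           WHERE condition_1 = 'YN756'
             AND type_cd = 'P';

          FOR i IN 1 .. l_rows.COUNT LOOP
            -- the "update the GTT" step becomes ordinary procedural logic here,
            -- e.g. look up b.data_a when column_a is NULL and overwrite l_rows(i).column_a
            NULL;
          END LOOP;
        END;
        /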

    Read the article

  • Need help in reading callgrind output

    - by n179911
    Hi, I have run callgrind on my application like this: "valgrind --tool=callgrind MyApplication", and then called "callgrind_annotate --auto=yes ./callgrind.out.2489". I see output like this:

        768,097,560  PROGRAM TOTALS
        --------------------------------------------------------------------------------
        Ir  file:function
        --------------------------------------------------------------------------------
        18,624,794  /build/buildd/eglibc-2.11.1/elf/dl-lookup.c:do_lookup_x [/lib/ld-2.11.1.so]
        18,149,492  /src/js/src/jsgc.cpp:JS_CallTracer'2 [/src/firefox-debug-objdir/js/src/libmozjs.so]
        16,328,897  /src/layout/style/nsCSSDataBlock.cpp:nsCSSExpandedDataBlock::DoAssertInitialState() [/src/firefox-debug-objdir/toolkit/library/libxul.so]
        13,376,634  /build/buildd/eglibc-2.11.1/nptl/pthread_getspecific.c:pthread_getspecific [/lib/libpthread-2.11.1.so]
        13,005,623  /build/buildd/eglibc-2.11.1/malloc/malloc.c:_int_malloc [/lib/libc-2.11.1.so]
        10,404,453  ???:0x0000000000009190 [/usr/lib/libpangocairo-1.0.so.0.2800.0]
        10,358,646  /src/xpcom/io/nsFastLoadFile.cpp:NS_AccumulateFastLoadChecksum(unsigned int*, unsigned char const*, unsigned int, int) [/src/firefox-debug-objdir/toolkit/library/libxul.so]
        8,543,634  /src/js/src/jsscan.cpp:js_GetToken [/src/firefox-debug-objdir/js/src/libmozjs.so]
        7,451,273  /src/xpcom/typelib/xpt/src/xpt_arena.c:XPT_ArenaMalloc [/src/firefox-debug-objdir/toolkit/library/libxul.so]
        7,335,131  ???:g_type_check_instance_is_a [/usr/lib/libgobject-2.0.so.0.2400.0]

    I have a few questions: What does the number next to each entry mean; does it mean the program cumulatively spent that much in the function named on the right? How can I tell how many times that function was called, and does the number include the cost of the functions called by that function? What does a line with ??? mean, e.g. ???:0x0000000000009190 [/usr/lib/libpangocairo-1.0.so.0.2800.0]? Thank you.
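
    A couple of callgrind_annotate options may help answer this directly; the flags below are documented for recent Valgrind releases, but check callgrind_annotate --help for the installed version before relying on them:

        # inclusive costs (function plus everything it calls) and a caller/callee tree
        callgrind_annotate --inclusive=yes --tree=both callgrind.out.2489

        # or browse interactively, including call counts per call site
        kcachegrind callgrind.out.2489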

    Read the article

  • Personal Cache vs Memcache?

    - by Kerry
    I have a personal caching class (based off WordPress'), which can be seen here: http://pastie.org/988427. I recently learned about memcache, and this article said to memcache EVERYTHING: http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html My first thought was to keep my class with its current functions and simply make it use memcache instead. Is there any downside to doing this? The main difference I see is that memcache persists on the server from page to page, while mine lasts only for a single page load. The problem I see arising, and this applies to any caching system, is that the data is dynamic: it changes all the time, whether it's search results, visible products, etc. If it's all cached, won't that create a problem? Is there a way to handle this? Obviously if something brings back the same results every time it should be cached, but that's why I was doing it on a per-page-load basis. I'm sure there is a way to handle this, or is the cache time usually just set to somewhere between 5 minutes and an hour?
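
    For reference, a minimal sketch of what the class's get/set calls might delegate to if backed by the Memcached extension; the key name, the hypothetical loader function, and the 300-second TTL are made up for illustration:

        <?php
        // assumes the PECL "memcached" extension and a memcached server on localhost
        $mc = new Memcached();
        $mc->addServer('localhost', 11211);

        $products = $mc->get('visible_products');
        if ($products === false) {
            $products = load_visible_products_from_db();   // hypothetical loader
            $mc->set('visible_products', $products, 300);  // expire after 5 minutes
        }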

    Read the article

  • PHP Single Sign On (SSO) generating new session id

    - by bigstylee
    I am trying to create a single sign-on process. The method I have implemented makes use of storing session data in a database. When a new user comes to the website (www.example2.com), a table of authentication is checked. As this is their first visit to the website, there will be no match. The browser is redirected to the authentication server at www.example1.com/authenticate.php?session_id=ABC123, where ABC123 represents the session id created on www.example2.com. The session id which is then generated on www.example1.com is stored alongside the session id passed in the URL parameter. The user is then redirected back to www.example2.com, and a match of session ids should be found. This WAS working fine in Firefox, but when I tried it in Chrome I noticed that the session id being generated when the browser is redirected back to www.example2.com is a new session id. As a result an infinite loop is created. This behaviour does not occur in Firefox. What is causing the new session id to be generated? More importantly, what can I do to stop it? Thanks in advance! EDIT: I had a logic error that was causing an infinite loop. This now works fine again in Firefox, but the infinite loop still occurs in Chrome and Internet Explorer.
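
    If the return redirect is arriving without the original session cookie, one possible workaround is to carry the id in the redirect URL and adopt it explicitly before session_start(). This is only a sketch: the parameter name sid is made up, the id should be validated as shown, and cookie settings (domain, path, third-party cookie handling in Chrome) are worth checking as well.

        <?php
        // on www.example2.com, before any output is sent
        if (isset($_GET['sid']) && preg_match('/^[A-Za-z0-9,-]{1,128}$/', $_GET['sid'])) {
            session_id($_GET['sid']);   // resume the session created before the redirect
        }
        session_start();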

    Read the article

  • How can I create objects based on dump file memory in a WinDbg extension?

    - by pj4533
    I work on a large application and frequently use WinDbg to diagnose issues based on a DMP file from a customer. I have written a few small extensions for WinDbg that have proved very useful for pulling bits of information out of DMP files. In my extension code I find myself dereferencing C++ class objects in the same way, over and over, by hand. For example:

        Address = GetExpression("somemodule!somesymbol");
        ReadMemory(Address, &addressOfPtr, sizeof(addressOfPtr), &cb);
        // get the actual address
        ReadMemory(addressOfObj, &addressOfObj, sizeof(addressOfObj), &cb);
        ULONG offset;
        ULONG addressOfField;
        GetFieldOffset("somemodule!somesymbolclass", "somefield", &offset);
        ReadMemory(addressOfObj+offset, &addressOfField, sizeof(addressOfField), &cb);

    That works well, but as I have written more extensions with greater functionality (accessing more complicated objects in our application's DMP files), I have longed for a better solution. I have access to the source of our own application, of course, so I figure there should be a way to copy an object out of a DMP file and use that memory to create an actual object in the debugger extension that I can call functions on (by linking in DLLs from our application). This would save me the trouble of pulling things out of the DMP by hand. Is this even possible? I tried the obvious thing: creating a new object in the extension, then overwriting it with a big ReadMemory directly from the DMP file. This seemed to put the data in the right fields, but it freaked out when I tried to call a function. I figure I am missing something... maybe C++ pulls some vtable funkiness that I don't know about? My code looks similar to this:

        SomeClass* thisClass = SomeClass::New();
        ReadMemory(addressOfObj, &(*thisClass), sizeof(*thisClass), &cb);
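
    One conservative approach that sidesteps the vtable problem is to copy only the plain data members into a local mirror struct, rather than trying to reconstruct the full C++ object. This is just a sketch: the mirror struct and its fields are hypothetical and have to be kept in sync with the real class layout by hand.

        // hypothetical flat mirror of SomeClass's data members (no virtuals, no pointers)
        struct SomeClassShadow {
            int    someIntField;
            double someDoubleField;
        };

        SomeClassShadow shadow = {0};
        ULONG cb = 0;
        ULONG fieldOffset = 0;

        // start the copy at the first data member so the vtable pointer is skipped
        GetFieldOffset("somemodule!SomeClass", "someIntField", &fieldOffset);
        ReadMemory(addressOfObj + fieldOffset, &shadow, sizeof(shadow), &cb);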

    Read the article

  • Reading Code - helpful visualizers and browser tools

    - by wishi_
    Hi! I find myself reading ten times more code than I write. My IDEs are all optimized for editing code, with completion, code assist, outlines, etc. However, if I'm checking out a completely new project, getting into the application's logic isn't helped much by these IDE features, and I cannot extend what I don't fully understand. If you check out a relatively new project like frama-c, for example, you'll see it has plugins that help you gain insight into unfamiliar code: http://frama-c.com/plugins.html (though of course that project has a different scope, which I'm fully aware of). I'm looking for something that helps with code reading, such as: providing a graph or reverse-engineered UML, showing variable scopes, showing which parts are affected by attempted modifications, visualizing data-flow semantics, or showing tag lists of heavily used functions. My hope is that something like this exists: perhaps some Eclipse plugins I don't know about, or a code browser that has some of these features?

    Read the article

  • How to use Profile in asp.net?

    - by Phsika
    I am trying to learn ASP.NET profile management. I added the XML below (FirstName, LastName and the other properties), but I cannot use Profile in code. If I try to write to a Profile property, my editor shows this error:

        Error 1  The name 'Profile' does not exist in the current context  C:\Documents and Settings\ykaratoprak\Desktop\Security\WebApp_profile\WebApp_profile\Default.aspx.cs  18  13  WebApp_profile

    How can I do that?

        <authentication mode="Windows"/>
        <profile>
          <properties>
            <add name="FirstName"/>
            <add name="LastName"/>
            <add name="Age"/>
            <add name="City"/>
          </properties>
        </profile>

        protected void Button1_Click(object sender, System.EventArgs e)
        {
            Profile.FirstName = TextBox1.Text;
            Profile.LastName = TextBox2.Text;
            Profile.Age = TextBox3.Text;
            Profile.City = TextBox4.Text;
            Label1.Text = "Profile stored successfully!<br />" +
                "<br />First Name: " + Profile.FirstName +
                "<br />Last Name: " + Profile.LastName +
                "<br />Age: " + Profile.Age +
                "<br />City: " + Profile.City;
        }
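
    This error is typical of Web Application projects, where the strongly typed Profile wrapper is not code-generated the way it is in Web Site projects. As a rough sketch, the same properties can still be read and written through ProfileBase; the property names must match the web.config entries above.

        protected void Button1_Click(object sender, System.EventArgs e)
        {
            // System.Web.Profile.ProfileBase, available even without the generated Profile class
            var profile = HttpContext.Current.Profile;
            profile.SetPropertyValue("FirstName", TextBox1.Text);
            profile.SetPropertyValue("LastName", TextBox2.Text);
            profile.Save();

            Label1.Text = "First Name: " + (string)profile.GetPropertyValue("FirstName");
        }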

    Read the article

  • C++ Map Question

    - by Wallace
    Hi. I'm working on my C++ assignment about soccer and I have encountered a problem with a map. When I store two or more "midfielders" as keys, the cout output shows the correct values, but when I multiply the second entry's ->second value, it "adds up" the first entry's ->second value and multiplies with that. For example:

        John midfielder 1
        Steven midfielder 3

    I have a program that already reads in the playerPosition, so the map goes like this:

        John 1 (Key, Value)
        Steven 3 (Key, Value)

        if (playerName == a->first && playerPosition == "midfielder")
        {
            cout << a->second * 2000 << endl; // number of goals * $2000
        }

    So by rights the program should output:

        2000
        6000

    But instead I'm getting:

        2000
        8000

    So I'm assuming it adds the 1 to the 3 (resulting in 4) and multiplies that by 2000, which is totally wrong. I tried to cout a->first and a->second in the program and I get:

        John 1
        Steven 3

    But after the multiplication it's totally different. Any ideas? Thanks.
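
    For comparison, here is a self-contained sketch of iterating a map and multiplying each value independently. If this prints 2000 and 6000 as expected, the accumulation in the real program is likely coming from a variable reused across iterations of the surrounding loop rather than from the map itself.

        #include <iostream>
        #include <map>
        #include <string>

        int main() {
            std::map<std::string, int> goals;
            goals["John"] = 1;
            goals["Steven"] = 3;

            for (std::map<std::string, int>::const_iterator a = goals.begin(); a != goals.end(); ++a) {
                // each entry is multiplied on its own; nothing carries over between iterations
                std::cout << a->first << ": " << a->second * 2000 << std::endl;
            }
            return 0;
        }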

    Read the article

  • ANSI or OEM Codepage when using MME and DirectMusic?

    - by Carl Seleborg
    Hello, I noticed that when reading MIDI port names from MME, the names are multi-byte strings encoded with the ANSI codepage, which my app uses by default. When receiving those names from the DirectMusic driver, the names are wide-character strings encoded with the OEM codepage. See this article by Raymond Chen for a quick refresher on codepages. On my German system, this means that when using the current codepage, which turns out to be the ANSI one, I get "Audiogerät" from MME and "Audiogeröt" from DirectMusic, the latter being wrong. This gets fixed when I treat that last name as OEM-encoded instead. So how do I know which codepage to decode those names with? Why does the name coming from DirectMusic get encoded differently? Does it come from the USB driver, the COM framework, or DirectMusic? How can I know for sure which codepage to use when reading the names of my MIDI ports? For info:

        - I use the MultiByteToWideChar() and WideCharToMultiByte() functions to perform the conversions, with CP_ACP and CP_OEMCP as the codepage argument.
        - I use midiInGetDeviceCaps() to get MIDI port information from the MME subsystem, and convert MIDIINCAPS.szPname using the CP_ACP (ANSI) codepage.
        - I use IID_IDirectMusic8::EnumPort() to get port information from DirectMusic, and convert DMUS_PORTCAPS.wszDescription using the CP_OEMCP codepage.

    Read the article

  • PHP Session data not being saved

    - by Crackerjack
    I have one of those "I swear I didn't touch the server" situations. I honestly didn't touch any of the PHP scripts. The problem I am having is that session data is not being saved across different pages or page refreshes. I know a new session is being created correctly, because I can set a session variable (e.g. $_SESSION['foo'] = "foo") and print it back out on the same page just fine. But when I try to use that same variable on another page, it is not set! Are there any PHP functions or pieces of information I can use on my host's server to see what is going on? Here is an example script that does not work on my host's server as of right now:

        <?php
        session_start();
        if (isset($_SESSION['views']))
            $_SESSION['views'] = $_SESSION['views'] + 1;
        else
            $_SESSION['views'] = 1;
        echo "views = " . $_SESSION['views'];
        echo '<p><a href="page1.php">Refresh</a></p>';
        ?>

    The 'views' variable never gets incremented after a page refresh. I'm thinking this is a problem on their side, but I wanted to make sure I'm not a complete idiot first. Here is the phpinfo() for my host's server (PHP version 4.4.7):
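
    A few built-in calls can narrow down whether the session file is being written at all and whether the session cookie is making it back to the server. This is only a diagnostic sketch; run it on two consecutive requests and compare:

        <?php
        session_start();
        echo 'session id: ' . session_id() . "\n";
        echo 'save path:  ' . session_save_path() . "\n";
        echo 'writable:   ' . (is_writable(session_save_path()) ? 'yes' : 'no') . "\n";
        echo 'cookie sent by browser: ' . (isset($_COOKIE[session_name()]) ? 'yes' : 'no') . "\n";
        print_r(session_get_cookie_params());
        ?>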

    Read the article

  • Proper use of HttpRequestInterceptor and CredentialsProvider in doing preemptive authentication with

    - by Preston
    I'm writing an application for Android that consumes some REST services I've created. These web services don't issue a standard Apache Basic challenge/response. Instead, in the server-side code I want to read the username and password from the HTTP(S) request and compare them against a database user to make sure they can run that service. I'm using HttpClient to do this, and I have the credentials stored on the client after the initial login (at least that's how I see this working). So here is where I'm stuck. Preemptive authentication under HttpClient requires you to set up an interceptor as a static member. This is the example Apache Components uses:

        HttpRequestInterceptor preemptiveAuth = new HttpRequestInterceptor() {
            @Override
            public void process(final HttpRequest request, final HttpContext context)
                    throws HttpException, IOException {
                AuthState authState = (AuthState) context.getAttribute(ClientContext.TARGET_AUTH_STATE);
                CredentialsProvider credsProvider = (CredentialsProvider) context.getAttribute(
                        ClientContext.CREDS_PROVIDER);
                HttpHost targetHost = (HttpHost) context.getAttribute(ExecutionContext.HTTP_TARGET_HOST);

                if (authState.getAuthScheme() == null) {
                    AuthScope authScope = new AuthScope(targetHost.getHostName(), targetHost.getPort());
                    Credentials creds = credsProvider.getCredentials(authScope);
                    if (creds != null) {
                        authState.setAuthScheme(new BasicScheme());
                        authState.setCredentials(creds);
                    }
                }
            }
        };

    So the question is this: what would the proper use of this be? Would I spin this up when the application starts, pulling the username and password out of memory and using them to create the CredentialsProvider which is then used by the HttpRequestInterceptor? Or is there a way to do this more dynamically?
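
    One possible wiring, sketched against HttpClient 4.0-era APIs: register the interceptor once on the client, and update the credentials provider whenever the user logs in. The host, port and credential values below are placeholders, not part of the original question.

        DefaultHttpClient client = new DefaultHttpClient();

        // run the preemptive-auth interceptor before the standard protocol interceptors
        client.addRequestInterceptor(preemptiveAuth, 0);

        // after a successful login, store the credentials the interceptor will pick up
        client.getCredentialsProvider().setCredentials(
                new AuthScope("api.example.com", 443),                  // placeholder host and port
                new UsernamePasswordCredentials(username, password));   // values from the login form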

    Read the article

  • NSArrayController not working with NSMutableDictionary for NSTableView

    - by Miraaj
    Hi all, I am trying to display content in an NSTableView using an NSArrayController whose content is NSMutableDictionary records. I followed the steps written below:

        1. In the application delegate class, I created an NSMutableArray object named 'geniuses' and stored some NSMutableDictionary objects in it with the keys 'geniusName' and 'domain'.
        2. I added an NSArrayController object in IB, bound its content property to the application delegate class, and set its model key path to 'geniuses'. In the attribute inspector pane I set the mode to class and the class name to NSMutableDictionary, and added the keys 'geniusName' and 'domain' to it.
        3. In IB I added a table view object and bound its content property to the array controller, with the controller key path set to arranged objects.
        4. I bound the value of its first column to the array controller, controller key path set to arranged objects, model key path set to 'geniusName'.
        5. I bound the value of its second column to the array controller, controller key path set to arranged objects, model key path set to 'geniusName'.

    After following these steps, when I built and ran the project I found an unpopulated table view. You can find my code here. Can anyone suggest where I may be wrong? Thanks, Miraaj

    Read the article

  • Chord Chart - Skip to key with a click

    - by Juan Gonzales
    I have a chord chart app that can transpose a chord chart up and down through the keys, but now I would like to expand the app so someone can pick a key and jump straight to it on a click event, using a function in JavaScript or jQuery. Can someone help me figure this out? The logic seems simple enough, but I'm just not sure how to implement it. Here are my current functions that allow the user to transpose up and down:

        var match;
        var chords  = ['C','C#','D','D#','E','F','F#','G','G#','A','A#','B','C','Db','D','Eb','E','F','Gb','G','Ab','A','Bb','B','C'];
        var chords2 = ['C','Db','D','Eb','E','F','Gb','G','Ab','A','Bb','B','C','C#','D','D#','E','F','F#','G','G#','A','A#','C'];
        var chordRegex = /(?:C#|D#|F#|G#|A#|Db|Eb|Gb|Ab|Bb|C|D|E|F|G|A|B)/g;

        function transposeUp(x) {
            $('.chord'+x).each(function(){
                ///// initializes variables /////
                var currentChord = $(this).text(); // gathers each object
                var output = "";
                var parts = currentChord.split(chordRegex);
                var index = 0;
                /////////////////////////////////
                while (match = chordRegex.exec(currentChord)){
                    var chordIndex = chords2.indexOf(match[0]);
                    output += parts[index++] + chords[chordIndex+1];
                }
                output += parts[index];
                $(this).text(output);
            });
        }

        function transposeDown(x){
            $('.chord'+x).each(function(){
                var currentChord = $(this).text(); // gathers each object
                var output = "";
                var parts = currentChord.split(chordRegex);
                var index = 0;
                while (match = chordRegex.exec(currentChord)){
                    var chordIndex = chords2.indexOf(match[0],1);
                    //var chordIndex = $.inArray(match[0], chords, -1);
                    output += parts[index++] + chords2[chordIndex-1];
                }
                output += parts[index];
                $(this).text(output);
            });
        }

    Any help is appreciated. The answer will be accepted! Thank you.
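
    A rough sketch of one way to do it, reusing the existing functions: compute how many semitone steps separate the current key from the chosen key (using the same chords2 lookup the code already relies on) and call transposeUp or transposeDown that many times. The select element's id and the idea of tracking a currentKey variable are assumptions, not part of the question's code.

        // assumes a <select id="keySelect"> whose options are key names such as "D" or "Eb",
        // and a variable tracking the key the chart is currently in
        var currentKey = 'C';

        function transposeToKey(x, targetKey) {
            // note: sharp-named keys sit past index 12 in chords2, so they land on the
            // right pitch class but take more iterations than strictly necessary
            var steps = chords2.indexOf(targetKey) - chords2.indexOf(currentKey);
            for (var i = 0; i < Math.abs(steps); i++) {
                if (steps > 0) { transposeUp(x); } else { transposeDown(x); }
            }
            currentKey = targetKey;
        }

        $('#keySelect').change(function () {
            transposeToKey(1, $(this).val());
        });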

    Read the article

  • Internal "Tee" setup

    - by RadlyEel
    I have inherited some really old VC6.0 code that I am upgrading to VS2008 for building a 64-bit app. One required feature that was implemented long, long ago is overriding std::cout so its output goes simultaneously to a console window and to a file. The implementation depended on the then-current VC98 library implementation of ostream and, of course, is now irretrievably broken with VS2008. It would be reasonable to accumulate all the output until program termination time and then dump it to a file. I got part of the way home by using freopen(), setvbuf(), and ios::sync_with_stdio(), but to my dismay, the internal library does not treat its buffer as a ring buffer; instead when it flushes to the output device it restarts at the beginning, so every flush wipes out all my accumulated output. Converting to a more standard logging function is not desirable, as there are over 1600 usages of "std::cout << " scattered throughout almost 60 files. I have considered overriding ostream's operator<< function, but I'm not sure if that will cover me, since there are global operator<< functions that can't be overridden. (Or can they?) Any ideas on how to accomplish this?
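
    One portable approach that avoids depending on library internals is a small streambuf that forwards every character to two destinations and is swapped in with rdbuf(); all 1600 "std::cout <<" usages then work unchanged. The following is a minimal sketch, with error handling kept to a minimum:

        #include <fstream>
        #include <iostream>
        #include <streambuf>

        // forwards every character written to it to two underlying buffers
        class TeeBuf : public std::streambuf {
        public:
            TeeBuf(std::streambuf* a, std::streambuf* b) : a_(a), b_(b) {}
        protected:
            virtual int overflow(int c) {
                if (c == EOF) return !EOF;
                int r1 = a_->sputc(static_cast<char>(c));
                int r2 = b_->sputc(static_cast<char>(c));
                return (r1 == EOF || r2 == EOF) ? EOF : c;
            }
            virtual int sync() {
                int r1 = a_->pubsync();
                int r2 = b_->pubsync();
                return (r1 == 0 && r2 == 0) ? 0 : -1;
            }
        private:
            std::streambuf* a_;
            std::streambuf* b_;
        };

        int main() {
            std::ofstream logFile("output.log");
            TeeBuf tee(std::cout.rdbuf(), logFile.rdbuf());
            std::streambuf* original = std::cout.rdbuf(&tee);   // install the tee

            std::cout << "goes to both the console and output.log" << std::endl;

            std::cout.rdbuf(original);   // restore before logFile is destroyed
            return 0;
        }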

    Read the article

  • ConcurrentLinkedQueue$Node remains in heap after remove()

    - by action8
    I have a multithreaded app writing and reading a ConcurrentLinkedQueue, which is conceptually used to back entries in a list/table. I originally used a ConcurrentHashMap for this, which worked well. A new requirement required tracking the order entries came in, so they could be removed in oldest-first order, depending on some conditions. ConcurrentLinkedQueue appeared to be a good choice, and functionally it works well. A configurable number of entries are held in memory, and when a new entry is offered once the limit is reached, the queue is searched in oldest-first order for one that can be removed. Certain entries are not to be removed by the system and wait for client interaction. What appears to be happening is that I have an entry at the front of the queue that occurred, say, 100K entries ago. The queue appears to hold only the configured number of entries (size() == 100), but when profiling I found there were ~100K ConcurrentLinkedQueue$Node objects in memory. This appears to be by design: just glancing at the source for ConcurrentLinkedQueue, a remove merely clears the reference to the stored object but leaves the linked node in place for iteration. Finally, my question: is there a "better" lazy way to handle a collection of this nature? I love the speed of ConcurrentLinkedQueue; I just can't afford the unbounded leak that appears to be possible in this case. If not, it seems like I'd have to create a second structure to track order, which may have the same issues, plus a synchronization concern.

    Read the article

  • How should I handle this Optimistic Concurrency error in this Entity Framework code, I have?

    - by Pure.Krome
    Hi folks, I have the following pseudocode in a Repository Pattern project that uses EF4:

        public void Delete(int someId)
        {
            // 1. Load the entity for that Id. If there is none, then null.
            // 2. If entity != null, then DeleteObject(..);
        }

    Pretty simple, but I'm getting a run-time error:

        ConcurrencyException: Store, Update, Insert or Delete statement affected an unexpected number of rows (0).

    Now, this is what is happening:

        1. Two instances of EF4 are running in the app at the same time.
        2. Instance A calls delete. Instance B calls delete a nanosecond later.
        3. Instance A loads the entity. Instance B also loads the entity.
        4. Instance A now deletes that entity - cool bananas.
        5. Instance B tries to delete the entity, but it's already gone. As such, the affected row count is 0 when it expected 1, or something like that.

    Basically, it figured out that the item it was supposed to delete didn't delete (because that happened a split second ago). I'm not sure if this is a race condition or something like it. Anyway, are there any tricks I can use here so the second call doesn't crash? I could make it a stored procedure, but I'm hoping to avoid that right now. Any ideas? I'm wondering if it's possible to lock that row (and that row only) when the select is called, forcing Instance B to wait until the row lock has been released. By that time the row is deleted, so when Instance B does its select the data is not there, and it will never try the delete.
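
    One possible approach, sketched below, is simply to treat the "row already gone" case as success by catching the concurrency exception raised by SaveChanges. The entity-set and property names are placeholders, not taken from the original project:

        public void Delete(int someId)
        {
            var entity = _context.Things.SingleOrDefault(t => t.Id == someId);  // placeholder set/property names
            if (entity == null)
                return;                          // someone else already deleted it

            try
            {
                _context.DeleteObject(entity);
                _context.SaveChanges();
            }
            catch (OptimisticConcurrencyException)
            {
                // the row disappeared between the load and the delete; for a delete
                // that is the outcome we wanted anyway, so drop the entity and move on
                _context.Detach(entity);
            }
        }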

    Read the article

  • CakePHP: Custom Function in bootstrap that uses $ajax->link not working

    - by nekko
    Hello, I have two questions: (1) Is it best practice to create global custom functions in the bootstrap file, or is there a better place to store them? (2) I am unable to use the following line of code in my custom function located in bootstrap.php:

        $url = $ajax->link('Delete',
            array('controller' => 'events', 'action' => 'delete', 22),
            array('update' => 'event'),
            'Do you want to delete this event?');
        echo $url;

    I receive the following error:

        Notice (8): Undefined variable: ajax [APP\config\bootstrap.php, line 271]

        function testAjax () {
            $url = $ajax->link('Delete',
                array('controller' => 'events', 'action' => 'delete', 22),
                array('update' => 'event'),
                'Do you want to delete this event?');

        testAjax - APP\config\bootstrap.php, line 271
        include - APP\views\event\queue.ctp, line 19
        View::_render() - CORE\cake\libs\view\view.php, line 649
        View::render() - CORE\cake\libs\view\view.php, line 372
        Controller::render() - CORE\cake\libs\controller\controller.php, line 766
        Dispatcher::_invoke() - CORE\cake\dispatcher.php, line 211
        Dispatcher::dispatch() - CORE\cake\dispatcher.php, line 181
        [main] - APP\webroot\index.php, line 91

    However, it works as intended if I place that same code in my view:

        <a onclick=" event.returnValue = false; return false;" id="link1656170149" href="/shout/events/delete/22">Delete</a>

    Please help :) Thanks in advance!!

    Read the article

  • APC values randomly disappear

    - by Michael
    I'm using APC for storing a map of class names to class file paths. I build the map like this in my autoload function:

        $class_paths = apc_fetch('class_paths');
        // If the class path is stored in the application cache - search finished.
        if (isset($class_paths[$class])) {
            return require_once $class_paths[$class];
        // Otherwise search in known places
        } else {
            // List of places to look for the class
            $paths = array(
                '/src/',
                '/modules/',
                '/libs/',
            );
            // Search directories and store the path in the cache if found.
            foreach ($paths as $path) {
                $file = DOC_ROOT . $path . $class . '.php';
                if (file_exists($file)) {
                    echo 'File was found in => ' . $file . '<br />';
                    $class_paths[$class] = $file;
                    apc_store('class_paths', $class_paths);
                    return require_once $file;
                }
            }
        }

    I can see that as more and more classes are loaded, they are added to the map, but at some point apc_fetch returns NULL in the middle of a page request instead of returning the map:

        Getting => class_paths
        Array
        (
            [MCS\CMS\Helper\LayoutHelper] => /Users/mbl/Documents/Projects/mcs_ibob/core/trunk/src/MCS/CMS/Helper/LayoutHelper.php
            [MCS\CMS\Model\Spot] => /Users/mbl/Documents/Projects/mcs_ibob/core/trunk/src/MCS/CMS/Model/Spot.php
        )

        Getting => class_paths
        {null}

    Many times the cached value will also be gone between page requests. What could be the reason for this? I'm using APC as an extension (PECL), running PHP 5.3.
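
    Independent of why entries get evicted (a small apc.shm_size, apc.ttl, or a web server restart are common suspects), storing one APC entry per class instead of one big array avoids the read-modify-write race where two concurrent requests overwrite each other's additions. A rough sketch of that variant follows; the namespace-to-path str_replace is an assumption about how the files are laid out.

        <?php
        function autoload_lookup($class)
        {
            $key  = 'class_path_' . $class;
            $file = apc_fetch($key, $found);
            if ($found && is_string($file)) {
                return $file;
            }

            // same search directories as the original autoloader
            foreach (array('/src/', '/modules/', '/libs/') as $path) {
                $candidate = DOC_ROOT . $path . str_replace('\\', '/', $class) . '.php';
                if (file_exists($candidate)) {
                    apc_store($key, $candidate);   // one small entry per class, no shared array
                    return $candidate;
                }
            }
            return null;
        }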

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array that will store the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

        1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
        2. Pre-allocate temporary arrays and store them as final fields in the algorithm object. The big downside is that this would mean only one thread could run the algorithm at any one time.
        3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will run the same algorithm simultaneously. (A sketch of this option follows below.)
        4. Pass lots of arrays around as parameters (including the temporary arrays for the algorithm to use). Not good, since it makes the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space.
        5. Allocate extremely large arrays (e.g. double[10000000]) and provide each algorithm with offsets into the array, so that different threads use different areas of the array independently. This will obviously require some code to manage the offsets and the allocation of array ranges.

    Any thoughts on which approach would be best (and why)?
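
    As a rough sketch of option 3, a per-thread scratch buffer via ThreadLocal might look like the following; the class name and the fixed 100000-element size are illustrative only.

        // per-thread temporary storage for the algorithms; each thread that calls
        // temp() gets its own lazily allocated array, reused across invocations
        public final class ScratchSpace {

            private static final ThreadLocal<double[]> TEMP = new ThreadLocal<double[]>() {
                @Override
                protected double[] initialValue() {
                    return new double[100000];
                }
            };

            private ScratchSpace() {}

            public static double[] temp() {
                return TEMP.get();
            }
        }

        // inside runMyAlgorithm:
        // double[] work = ScratchSpace.temp();   // no allocation after the first call per thread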

    Read the article

  • javascript plugin - singleton pattern?

    - by Adam Kiss
    intro: Hello, I'm trying to develop a plugin or object in JavaScript which will control some properties of some objects. It will also use jQuery to ease development.

    idea: This is pseudocode to give you an idea, since I can't really describe it in English with the right words, which makes it impossible to use Google to find out what I want to use (and learn). I will have a plugin (maybe an object?) with some variables and methods:

        plugin hideshow(startupconfig){
            var c, //collection
            add: function(what){
                c += what;
            },
            do: function(){
                c.show().hide().stop(); //example code
            }
        }

    and I will use it this way (or sort of):

        var p = new Plugin();
        p
          .add($('p#simple'))
          .add($('p#simple2'))
          .do();

    note: I'm not really looking for jQuery plugin tutorials; it's more about using the singleton pattern on a document in JavaScript. jQuery is mentioned only because it will be used to modify the DOM and simplify selectors, and a jQuery plugin would maybe only be needed for that one little add function. I'm looking for something that will sit on top of my document and call functions from itself based on a timer, user actions, or whatever.

    problems: I don't really know where to start with the construction of this object/plugin. I'm not really sure how to maintain one variable which is a collection of jQuery objects (something like $('#simple, #simple2');). I would maybe like to extend jQuery with $.fn.addToPlugin to use chaining of objects, but that should be simple (really just each( p.add($(this)); )). I hope I make some sense; ideas, links or advice appreciated.
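
    As a rough sketch of the idea (names are illustrative, and note that "do" is a reserved word in JavaScript, so the method is called run here): the collection can simply be an empty jQuery set that grows with jQuery's add(), and each method returns this so the calls chain.

        // minimal sketch of a chainable controller around a jQuery collection
        function Plugin(startupConfig) {
            var c = $();                              // empty jQuery collection

            return {
                add: function (what) {
                    c = c.add(what);                  // add() returns a merged collection
                    return this;
                },
                run: function () {
                    c.show().hide().stop();           // example action, as in the pseudocode above
                    return this;
                }
            };
        }

        var p = Plugin();
        p.add($('p#simple')).add($('p#simple2')).run();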

    Read the article

  • Boost::thread mutex issue: Try to lock, access violation

    - by user1419305
    I am currently learning how to multithread with C++, and for that I'm using boost::thread. I'm using it for a simple game engine running three threads. Two of the threads read and write the same variables, which are stored inside something I call PrimitiveObjs: basically balls, plates, boxes, etc. But I can't really get it to work. I think the problem is that the two threads are trying to access the same memory location at the same time. I have tried to avoid this using mutex locks, but so far I've had no luck; it works sometimes, but if I spam it, I end up with this exception:

        First-chance exception at 0x00cbfef9 in TTTTT.exe: 0xC0000005: Access violation reading location 0xdddddded.
        Unhandled exception at 0x77d315de in TTTTT.exe: 0xC0000005: Access violation reading location 0xdddddded.

    These are the functions inside the object that I'm using for this, and the debugger is also blaming them for the exception:

        void PrimitiveObj::setPos(glm::vec3 in){
            boost::mutex mDisposingMutex;
            boost::try_mutex::scoped_try_lock lock(mDisposingMutex);
            if ( lock) {
                position = in;
                boost::try_mutex::scoped_try_lock unlock(mDisposingMutex);
            }
        }

        glm::vec3 PrimitiveObj::getPos(){
            boost::mutex myMutex;
            boost::try_mutex::scoped_try_lock lock(myMutex);
            if ( lock) {
                glm::vec3 curPos = position;
                boost::try_mutex::scoped_try_lock unlock(myMutex);
                return curPos;
            }
            return glm::vec3(0,0,0);
        }

    Any ideas?
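
    For what it's worth, a common pitfall in code shaped like this is that the mutex is a local variable, so every call locks its own brand-new mutex and nothing is actually serialized between threads. A sketch of the usual structure, with the mutex as a member shared by both functions (scoped_lock releases it automatically when it goes out of scope):

        class PrimitiveObj {
        public:
            void setPos(const glm::vec3& in) {
                boost::mutex::scoped_lock lock(m_positionMutex);  // blocks until the lock is acquired
                position = in;
            }                                                     // lock released here

            glm::vec3 getPos() {
                boost::mutex::scoped_lock lock(m_positionMutex);
                return position;
            }

        private:
            glm::vec3 position;
            boost::mutex m_positionMutex;   // one mutex shared by every caller of this object
        };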

    Read the article

  • Pointer Implementation Details in C

    - by Will Bickford
    I would like to know of architectures which violate the assumptions I've listed below. Also, I would like to know if any of the assumptions are false for all architectures (i.e. if any of them are just completely wrong).

        1. sizeof(int *) == sizeof(char *) == sizeof(void *) == sizeof(func_ptr *)
        2. The in-memory representation of all pointers for a given architecture is the same regardless of the data type pointed to.
        3. The in-memory representation of a pointer is the same as an integer of the same bit length as the architecture.
        4. Multiplication and division of pointer data types are only forbidden by the compiler. NOTE: Yes, I know this is nonsensical. What I mean is: is there hardware support to forbid this incorrect usage?
        5. All pointer values can be cast to a single integer. In other words, what architectures still make use of segments and offsets?
        6. Incrementing a pointer is equivalent to adding sizeof(the pointed data type) to the memory address stored by the pointer. If p is an int32* then p+1 is equal to the memory address 4 bytes after p.

    I'm most used to pointers being used in a contiguous, virtual memory space. For that usage, I can generally get by thinking of them as addresses on a number line. See http://stackoverflow.com/questions/1350471/pointer-comparison/1350488#1350488.

    Read the article

  • .Net Entity objectcontext thread error

    - by Chris Klepeis
    I have an n-layered ASP.NET application which returns an object from my DAL to the BAL like so:

        public IEnumerable<SourceKey> Get(SourceKey sk)
        {
            var query = from SourceKey in _dataContext.SourceKeys
                        select SourceKey;

            if (sk.sourceKey1 != null)
            {
                query = from SourceKey in query
                        where SourceKey.sourceKey1 == sk.sourceKey1
                        select SourceKey;
            }

            return query.AsEnumerable();
        }

    This result passes through my business layer and hits the UI layer to display to the end users. I do not lazy load, to prevent query execution in other layers of my application. I created another function in my DAL to delete objects:

        public void Delete(SourceKey sk)
        {
            try
            {
                _dataContext.DeleteObject(sk);
                _dataContext.SaveChanges();
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.Message + " " + ex.StackTrace + " " + ex.InnerException);
            }
        }

    When I try to call Delete after calling the Get function, I receive this error:

        New transaction is not allowed because there are other threads running in the session

    This is an ASP.NET app. My DAL contains an entity data model. The class in which I have the above functions shares the same _dataContext, which is instantiated in my constructor. My guess is that the reader is still open from the Get function and was not closed. How can I close it?
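
    The usual cause is exactly that: AsEnumerable() keeps the query deferred, so the data reader may still be open on the shared ObjectContext when Delete runs. One possible fix, sketched below, is to materialize the results before returning them (at the cost of loading them all into memory at once):

        public IEnumerable<SourceKey> Get(SourceKey sk)
        {
            IQueryable<SourceKey> query = _dataContext.SourceKeys;

            if (sk.sourceKey1 != null)
            {
                query = query.Where(k => k.sourceKey1 == sk.sourceKey1);
            }

            // ToList() executes the query now, so no reader is left open
            // when Delete() later uses the same _dataContext
            return query.ToList();
        }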

    Read the article

  • Where can I find a professional image gallery built on a javascript framework?

    - by user278457
    I'm looking for a replacement for Galleria, hopefully built on jQuery, though other JavaScript frameworks such as Prototype or MooTools are fine too. I used Galleria a while back and I need a similar product now. Unfortunately, the devkick.com domain seems to have disappeared in the meantime, and I'm wary of using products that aren't actively maintained. I'm willing to pay up to $50 per site in licensing costs if the product meets my needs. I'm specifically looking for a gallery with the following features:

        - Every image in the gallery preloads as soon as possible, not as the user clicks "next"
        - Minimalist default CSS to keep my subsequent styling headaches down, preferably a "darkroom" style by default, much as Galleria looks
        - Every element that constructs the image gallery is simple and logical to reference with CSS
        - As easy to install as adding a CSS class to a single unordered list
        - No dependencies other than the core jQuery/other library; "easing" and other effects must be optional
        - Works on browsers back to IE6, Firefox 3, Safari (and iPhone), Chrome, Opera
        - Has a JavaScript API that lets me trigger callback functions on common events such as "user clicks next" or "image loads"
        - Degrades gracefully without JavaScript: either displays the images as a list, or just displays the first image in the list
        - Bonus: the gallery can display other content, such as video or external sites, like the modal boxes at shadowbox-js.com
        - Well documented
        - Minimal bandwidth requirement: the .js file should be ~10 KB minified
        - Bonus: the gallery source is hosted on a reliable CDN like Google's
        - Bonus: thumbnails for images do not appear until the main image has loaded
        - Bonus: includes the ability to set parameters with JSON to change common behaviours, such as slide/fade transitions or automatic image switching every X seconds

    Read the article

  • std::list iterator: get next element

    - by sheepsimulator
    I'm trying to build a string using data elements stored in a std::list, where I want commas placed only between the elements (i.e., if the elements in the list are {A,B,C,D}, the resulting string should be "A,B,C,D"). This code does not work:

        typedef std::list< shared_ptr<EventDataItem> > DataItemList;
        // ...
        std::string Compose(DataItemList& dilList)
        {
            std::stringstream ssDataSegment;
            for(iterItems = dilList.begin(); iterItems != dilList.end(); iterItems++)
            {
                // Look ahead in the list to see if the next element is the end
                if((iterItems + 1) == dilList.end())
                {
                    ssDataSegment << (*iterItems)->ToString();
                }
                else
                {
                    ssDataSegment << (*iterItems)->ToString() << ",";
                }
            }
            return ssDataSegment.str();
        }

    How do I get at "the next item" in a std::list using an iterator? I would have expected that, since it's a linked list, getting at the next item would be easy. Why can't I?
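
    std::list iterators are bidirectional rather than random-access, so iterItems + 1 does not compile; one option is to copy the iterator and increment the copy to peek ahead. A sketch of that, plus a simpler variant that avoids the lookahead entirely, is shown below.

        std::string Compose(DataItemList& dilList)
        {
            std::stringstream ssDataSegment;
            for (DataItemList::iterator it = dilList.begin(); it != dilList.end(); ++it)
            {
                // peek at the next element by advancing a copy of the iterator
                DataItemList::iterator next = it;
                ++next;

                ssDataSegment << (*it)->ToString();
                if (next != dilList.end())
                    ssDataSegment << ",";
            }
            return ssDataSegment.str();

            // alternative: drop the lookahead and prepend the separator instead, e.g.
            //   if (it != dilList.begin()) ssDataSegment << ",";
            //   ssDataSegment << (*it)->ToString();
        }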

    Read the article
