Search Results

Search found 5057 results on 203 pages for 'force'.


  • Delphi fsStayOnTop oddity

    - by TallGuy
    Here is the deal. The main form is set to fsNormal. This main form is maximized full screen with a floating toolbar. The toolbar is a normal form with its style set to fsStayOnTop. Most of the time this works as expected: the main form displays and the toolbar floats on top of it. Sometimes (this is a bugger to find a reproducible set of steps) when alt-tabbing to and from other apps (or when clicking the Delphi app's icon on the taskbar) the following symptoms can happen... When alt-tabbing away from the Delphi app, the floating topmost fsStayOnTop form stays on top of the other apps. So if I alt-tab to Firefox, the floating menu stays on top of Firefox too. When alt-tabbing from another app back to the Delphi app, the floating menu is not visible (as it is behind the fsNormal main form). Is there a known bug, or any hacks to force it to work? This also seems to happen most when multiple copies of the app are running (they have no interaction between them and should be running in their own Windows "sandbox"). It is as if Delphi gets confused about which window is meant to be on top and swaps them, or changes the floating form to a stay-on-top-of-everything mode. Or have I misunderstood fsStayOnTop? I am assuming that setting a form's style to fsStayOnTop makes it stay on top of all other forms within the current app, not on top of all windows across other running apps. Thanks for any tips or workarounds.

    Read the article

  • Submit button on nested form submits the outer form in IE7

    - by Mike Christensen
    I have the following code on my Home.aspx page: <form id="frmJump" method="post" action="Views/ViewsHome.aspx"> <input name="JumpProject" /><input type="submit" value="Go" /> </form> However, when I click the "Go" button, the page posts back to Home.aspx rather than going to ViewsHome.aspx. I even tried adding some script to force the form to submit: <input name="JumpProject" onkeypress="if(event.keyCode == 13) { this.form.submit(); return false; }" /> But still, even if I press ENTER, the Home.aspx page is reloaded. The only thing I can see that might be borking things is that this form is actually a child form of the main POSTBACK form that ASP.NET injects into the page. I'm sure there's something stupid I'm missing and this post will get 800 downvotes instantly banishing me back into the n00b realm, but perhaps I haven't gotten enough sleep lately and I'm missing something stupid. This is on IE7 and an ASP.NET 4.0 backend. I also have jQuery libraries loaded on the page in case jQuery can improve this somehow. Thanks!

    Read the article

  • Inheritance and type parameters of Traversable

    - by Jesper
    I'm studying the source code of the Scala 2.8 collection classes. I have questions about the hierarchy of scala.collection.Traversable. Look at the following declarations: package scala.collection trait Traversable[+A] extends TraversableLike[A, Traversable[A]] with GenericTraversableTemplate[A, Traversable] trait TraversableLike[+A, +Repr] extends HasNewBuilder[A, Repr] with TraversableOnce[A] package scala.collection.generic trait HasNewBuilder[+A, +Repr] trait GenericTraversableTemplate[+A, +CC[X] <: Traversable[X]] extends HasNewBuilder[A, CC[A] @uncheckedVariance] Question: Why does Traversable extend GenericTraversableTemplate with type parameters [A, Traversable] - why not [A, Traversable[A]]? I tried some experimenting with a small program with the same structure and got a strange error message when I tried to change it to Traversable[A]: error: Traversable[A] takes no type parameters, expected: one I guess that the use of the @uncheckedVariance annotation in GenericTraversableTemplate also has to do with this? (That seems like a kind of potentially unsafe hack to force things to work...). Question: When you look at the hierarchy, you see that Traversable inherits HasNewBuilder twice (once via TraversableLike and once via GenericTraversableTemplate), but with slightly different type parameters. How does this work exactly? Why don't the different type parameters cause an error?

    Read the article

  • sqlalchemy dynamic mapping

    - by adancu
    Hi, I have the following problem. I have the class: class Word(object): def __init__(self): self.id = None self.columns = {} def __str__(self): return "(%s, %s)" % (str(self.id), str(self.columns)) self.columns is a dict which will hold (columnName: columnValue) values. The names of the columns are known at runtime and they are loaded into a wordColumns list, for example wordColumns = ['english', 'korean', 'romanian'] wordTable = Table('word', metadata, Column('id', Integer, primary_key = True) ) for columnName in wordColumns: wordTable.append_column(Column(columnName, String(255), nullable = False)) I even created explicit mapper properties to "force" the table columns to be mapped onto word.columns[columnName] instead of word.columnName. I don't get any error on mapping, but it seems that it doesn't work. mapperProperties = {} for column in wordColumns: mapperProperties['columns[\'%s\']' % column] = wordTable.columns[column] mapper(Word, wordTable, mapperProperties) When I load a word object, SQLAlchemy creates an object which has the word.columns['english'], word.columns['korean'] etc. properties instead of loading them into the word.columns dict. So for each column, it creates a new property. Moreover, the word.columns dictionary doesn't even exist. In the same way, when I try to persist a word, SQLAlchemy expects to find the column values in properties named like word.columns['english'] (string type) instead of in the dictionary word.columns. I have to say that my experience with Python and SQLAlchemy is quite limited; maybe it isn't possible to do what I'm trying to do. Any help appreciated, thanks in advance.

    Read the article

  • The YouTube API sometimes throws the error: Call to a member function children() on a non-object

    - by Anna Lica
    When I launch the PHP script it sometimes works fine, but many other times it gives me this error: Fatal error: Call to a member function children() on a non-object in /membri/americanhorizon/ytvideo/rilevametadatadaurlyoutube.php on line 21 This is the first part of the code: // set feed URL $feedURL = 'http://gdata.youtube.com/feeds/api/videos/dZec2Lbr_r8'; // read feed into SimpleXML object $entry = simplexml_load_file($feedURL); $video = parseVideoEntry($entry); function parseVideoEntry($entry) { $obj= new stdClass; // get nodes in media: namespace for media information $media = $entry->children('http://search.yahoo.com/mrss/'); //<----this is the doomed line 21 UPDATE: the solution I adopted: for ($i=0 ; $i< count($fileArray); $i++) { // set feed URL $feedURL = 'http://gdata.youtube.com/feeds/api/videos/'.$fileArray[$i]; // read feed into SimpleXML object $entry = simplexml_load_file($feedURL); if ( is_object($entry)) { $video = parseVideoEntry($entry); echo ($video->description."|".$video->length); echo "<br>"; } else { $i--; } } This way I force the script to re-check the file that caused the error.
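
    One small note on the adopted solution: decrementing $i on failure retries the same video id forever if that feed keeps failing. A bounded-retry version of the same idea, sketched in Java purely for illustration (fetchFeed and parseVideoEntry are hypothetical stand-ins for the PHP calls above):

        import java.util.List;

        public class FeedRetry {
            private static final int MAX_ATTEMPTS = 3;

            public static void processAll(List<String> videoIds) {
                for (String id : videoIds) {
                    String feedUrl = "http://gdata.youtube.com/feeds/api/videos/" + id;
                    for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
                        Object entry = fetchFeed(feedUrl);   // hypothetical: returns null on failure
                        if (entry != null) {
                            parseVideoEntry(entry);          // hypothetical: equivalent of the PHP parser
                            break;                           // success, move on to the next video id
                        }
                        // otherwise loop and retry, at most MAX_ATTEMPTS times per feed
                    }
                }
            }

            // Stubs so the sketch compiles; the real work is done by the PHP above.
            private static Object fetchFeed(String url) { return null; }
            private static void parseVideoEntry(Object entry) { }
        }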

    Read the article

  • Google Data API returning an invalid access token

    - by kingdavies
    I'm trying to pull a list of contacts from a google account. But Google returns a 401. The url used for requesting an authorization code: String codeUrl = 'https://accounts.google.com/o/oauth2/auth' + '?' + 'client_id=' + EncodingUtil.urlEncode(CLIENT_ID, 'UTF-8') + '&redirect_uri=' + EncodingUtil.urlEncode(MY_URL, 'UTF-8') + '&scope=' + EncodingUtil.urlEncode('https://www.google.com/m8/feeds/', 'UTF-8') + '&access_type=' + 'offline' + '&response_type=' + EncodingUtil.urlEncode('code', 'UTF-8') + '&approval_prompt=' + EncodingUtil.urlEncode('force', 'UTF-8'); Exchanging the returned authorization code for an access token (and refresh token): String params = 'code=' + EncodingUtil.urlEncode(authCode, 'UTF-8') + '&client_id=' + EncodingUtil.urlEncode(CLIENT_ID, 'UTF-8') + '&client_secret=' + EncodingUtil.urlEncode(CLIENT_SECRET, 'UTF-8') + '&redirect_uri=' + EncodingUtil.urlEncode(MY_URL, 'UTF-8') + '&grant_type=' + EncodingUtil.urlEncode('authorization_code', 'UTF-8'); Http con = new Http(); Httprequest req = new Httprequest(); req.setEndpoint('https://accounts.google.com/o/oauth2/token'); req.setHeader('Content-Type', 'application/x-www-form-urlencoded'); req.setBody(params); req.setMethod('POST'); Httpresponse reply = con.send(req); Which returns a JSON array with what looks like a valid access token: { "access_token" : "{access_token}", "token_type" : "Bearer", "expires_in" : 3600, "refresh_token" : "{refresh_token}" } However when I try and use the access token (either in code or curl) Google returns a 401: curl -H "Authorization: Bearer {access_token}" https://www.google.com/m8/feeds/contacts/default/full/ Incidentally the same curl command but with an access token acquired via https://code.google.com/oauthplayground/ works. Which leads me to believe there is something wrong with the exchanging authorization code for access token request as the returned access token does not work. I should add this is all within the expires_in time frame so its not that the access_token has expired
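
    For comparison, here is a minimal Java sketch of the same two HTTP calls (the token exchange, then the contacts feed with the Bearer header), using only standard java.net classes. CLIENT_ID, CLIENT_SECRET, MY_URL, the auth code and the access token are placeholders for the values used above:

        import java.io.*;
        import java.net.*;
        import java.nio.charset.StandardCharsets;

        public class OAuthSketch {
            static String post(String url, String body) throws IOException {
                HttpURLConnection con = (HttpURLConnection) new URL(url).openConnection();
                con.setRequestMethod("POST");
                con.setDoOutput(true);
                con.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
                try (OutputStream out = con.getOutputStream()) {
                    out.write(body.getBytes(StandardCharsets.UTF_8));
                }
                return read(con);
            }

            static String read(HttpURLConnection con) throws IOException {
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(con.getInputStream(), StandardCharsets.UTF_8))) {
                    StringBuilder sb = new StringBuilder();
                    for (String line; (line = in.readLine()) != null; ) sb.append(line);
                    return sb.toString();
                }
            }

            public static void main(String[] args) throws IOException {
                String clientId = "CLIENT_ID", clientSecret = "CLIENT_SECRET";   // placeholders
                String redirect = "MY_URL", authCode = "AUTH_CODE";              // placeholders

                // 1. Exchange the authorization code for an access token.
                String params = "code=" + URLEncoder.encode(authCode, "UTF-8")
                        + "&client_id=" + URLEncoder.encode(clientId, "UTF-8")
                        + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8")
                        + "&redirect_uri=" + URLEncoder.encode(redirect, "UTF-8")
                        + "&grant_type=authorization_code";
                String tokenJson = post("https://accounts.google.com/o/oauth2/token", params);
                System.out.println(tokenJson);   // parse access_token out of this JSON

                // 2. Use the token against the contacts feed (same as the curl call above).
                String accessToken = "ACCESS_TOKEN_FROM_STEP_1";                 // placeholder
                HttpURLConnection con = (HttpURLConnection)
                        new URL("https://www.google.com/m8/feeds/contacts/default/full/").openConnection();
                con.setRequestProperty("Authorization", "Bearer " + accessToken);
                System.out.println(read(con));
            }
        }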

    Read the article

  • Why does the '#weight' property sometimes not have any effect in Drupal forms?

    - by Adrian
    Hello, I'm trying to create a node form for a custom type. I have organic groups and taxonomy both enabled, but want their elements to come out in a non-standard order. So I've implemented hook_form_alter and set the #weight property of the og_nodeapi subarray to -1000, but it still goes after taxonomy and menu. I even tried changing the subarray to a fieldset (to force it to actually be rendered), but no dice. I also tried setting $form['taxonomy']['#weight'] = 1000 (I have two vocabs so it's already being rendered as a fieldset) but that didn't work either. I set the weight of my module very high and confirmed in the system table that it is indeed the highest module on the site - so I'm all out of ideas. Any suggestions? Update: While I'm not exactly sure how, I did manage to get the taxonomy fieldset to sink below everything else, but now I have a related problem that's hopefully more manageable to understand. Within the taxonomy fieldset, I have two items (a tags and a multi-select), and I wanted to add some instructions in hook_form_alter as follows: $form['taxonomy']['instructions'] = array( '#value' => "These are the instructions", '#weight' => -1, ); You guessed it, this appears after the terms inserted by the taxonomy module. However, if I change this to a fieldset: $form['taxonomy']['instructions'] = array( '#type' => 'fieldset', // <-- here '#title' => 'Instructions', // <-- and here for good measure '#value' => "These are the instructions", '#weight' => -1, ); then it magically floats to the top as I'd intended. I also tried textarea (this also worked) and explicitly saying markup (this did not). So basically, changing the type from "markup" (the default IIRC) to "fieldset" has the effect of no longer ignoring its weight.

    Read the article

  • Thread management advice - Is TPL a good idea?

    - by Ian
    I'm hoping to get some advice on the use of thread management and hopefully the Task Parallel Library, because I'm not sure I've been going down the correct route. It's probably best if I give an outline of what I'm trying to do. Given a Problem, I need to generate a Solution using a heuristic-based algorithm. I start off by calculating a base solution; I don't think this operation can be parallelised, so we don't need to worry about it. Once the initial solution has been generated, I want to trigger n threads which attempt to find a better solution. These threads need to do a couple of things: They need to be initialized with a different 'optimization metric'. In other words, they are attempting to optimize different things, with a precedence level set within code. This means they all run slightly different calculation engines. I'm not sure if I can do this with the TPL. If one of the threads finds a better solution than the currently best known solution (which needs to be shared across all threads), then it needs to update the best solution and force a number of other threads to restart (again, this depends on the precedence levels of the optimization metrics). I may also wish to combine certain calculations across threads (e.g. keep a union of probabilities for a certain approach to the problem). This is probably more optional though. The whole system obviously needs to be thread safe, and I want it to be running as fast as possible. I tried an implementation that involved managing my own threads and shutting them down etc., but it started getting quite complicated, and I'm now wondering if the TPL might be better. Can anyone offer any general guidance? Thanks...
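
    The question is about .NET's TPL, so treat the following purely as an illustration of one piece of the design that is language-agnostic: publishing a better solution atomically so every worker always sees the current best. A minimal Java sketch (the Solution type and its cost field are made up for illustration):

        import java.util.concurrent.atomic.AtomicReference;

        // Hypothetical solution type: lower cost is better.
        final class Solution {
            final double cost;
            Solution(double cost) { this.cost = cost; }
        }

        class BestSolutionHolder {
            private final AtomicReference<Solution> best = new AtomicReference<>();

            // Returns true if the candidate became the new global best.
            boolean offer(Solution candidate) {
                while (true) {
                    Solution current = best.get();
                    if (current != null && current.cost <= candidate.cost) {
                        return false;                   // not an improvement
                    }
                    if (best.compareAndSet(current, candidate)) {
                        return true;                    // published; workers can now react (e.g. restart)
                    }
                    // another thread updated best concurrently; re-read and retry
                }
            }

            Solution current() {
                return best.get();
            }
        }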

    Read the article

  • Exclude subdirectory from rewrite rule in web.config

    - by Clog
    This question comes up often, but I can only find solutions for PHP, Apache, htaccess etc but not for web.config I would like my pages to return in HTTP not HTTPS, except for forms within certain subdirectories. I have created the following web.config file, but how do I exclude a subdirectory called forms. <configuration> <system.webServer> <rewrite> <rules> <rule name="Force all to HTTP" stopProcessing="true"> <match url="(.*)" /> <conditions> <add input="{HTTPS}" pattern="on" ignoreCase="true" /> </conditions> <action type="Redirect" redirectType="Found" url="http://www.mysite.com/{R:1}" /> </rule> </rules> </rewrite> </system.webServer> </configuration> Many thanks all you clever clogs.

    Read the article

  • MySQL developer here -- Nesting with select * finicky in Oracle 10g?

    - by John Sullivan
    I'm writing a simple diagnostic query then attempting to execute it in the Oracle 10g SQL Scratchpad. EDIT: It will not be used in code. I'm nesting a simple "Select *" and it's giving me errors. In the SQL Scratchpad for Oracle 10g Enterprise Manager Console, this statement runs fine. SELECT * FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id and sql.sql_text <> ' ' If I try to wrap that up in Select * from () tb2 I get an error, "ORA-00918: Column Ambiguously Defined". I didn't think that could ever happen with this kind of statement so I am a bit confused. select * from (SELECT * FROM v$session sess, v$sql sql WHERE sql.sql_id(+) = sess.sql_id and sql.sql_text <> ' ') tb2 You should always be able to select * from the result set of another select * statement using this structure as far as I'm aware... right? Is Oracle/10g/the scratchpad trying to force me to accept a certain syntactic structure to prevent excessive nesting? Is this a bug in scratchpad or something about how oracle works?

    Read the article

  • Android Java writing text file to sd card

    - by Paul
    I have a strange problem I've come across. My app can write a simple text file to the SD card; sometimes it works for some people but not for others, and I have no idea why. For some people it force closes if they put certain characters like "..." in it, and such. I cannot seem to reproduce it as I've had no trouble myself, but this is the code that handles it. Can anyone think of something that may lead to problems, or a better way to do it? public void generateNoteOnSD(String sFileName, String sBody){ try { File root = new File(Environment.getExternalStorageDirectory(), "Notes"); if (!root.exists()) { root.mkdirs(); } File gpxfile = new File(root, sFileName); FileWriter writer = new FileWriter(gpxfile); writer.append(sBody); writer.flush(); writer.close(); Toast.makeText(this, "Saved", Toast.LENGTH_SHORT).show(); } catch(IOException e) { e.printStackTrace(); importError = e.getMessage(); iError(); } }
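
    Two things the method above never checks are whether external storage is actually mounted and whether sFileName contains characters the filesystem rejects (for example a '/', or names that FAT cards refuse, such as ones made only of dots), either of which could explain why it fails only for some users. A hedged sketch of those guards as a variant of the same method inside the same Activity (error reporting simplified):

        // Sketch: guard against unmounted storage and illegal file-name characters.
        public void generateNoteOnSD(String sFileName, String sBody) {
            if (!Environment.MEDIA_MOUNTED.equals(Environment.getExternalStorageState())) {
                Toast.makeText(this, "SD card not available", Toast.LENGTH_SHORT).show();
                return;
            }
            // Replace characters that are not legal in FAT file names.
            String safeName = sFileName.replaceAll("[\\\\/:*?\"<>|]", "_").trim();
            if (safeName.length() == 0) {
                safeName = "note.txt";
            }
            try {
                File root = new File(Environment.getExternalStorageDirectory(), "Notes");
                if (!root.exists() && !root.mkdirs()) {
                    throw new IOException("Could not create " + root);
                }
                File noteFile = new File(root, safeName);
                FileWriter writer = new FileWriter(noteFile);
                try {
                    writer.append(sBody);
                    writer.flush();
                } finally {
                    writer.close();
                }
                Toast.makeText(this, "Saved", Toast.LENGTH_SHORT).show();
            } catch (IOException e) {
                e.printStackTrace();
                Toast.makeText(this, "Save failed: " + e.getMessage(), Toast.LENGTH_SHORT).show();
            }
        }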

    Read the article

  • Some Sonatype Nexus questions

    - by smallufo
    I deployed a Sonatype Nexus server inside my LAN, mapping some remote repositories to my public repositories. First question: why do these repositories not sync with the "real" repositories? For example, I mapped Maven Central (http://repo1.maven.org/maven2) to "central", but when I browse http://smallufo:8081/nexus/content/repositories/central/org/springframework/ the packages are not complete; in http://repo2.maven.org/maven2/org/springframework/ there are tons of artifacts, but I only have some of them. And the versions are old... for example, spring-core is only 2.5.6.SEC01, but the latest version is 3.0.2.RELEASE, and my Maven client seems to find only the old artifacts. "central" is a proxy repository, so it should be the same as the remote server. I tried to "Expire Cache", "ReIndex" and "Incremental ReIndex" the whole "central" repository: after a long time at almost 100% Java process load, the situation is no better; it just added some artifacts, still not reflecting the real Maven Central data. Second question: what's the difference between "Expire Cache", "ReIndex" and "Incremental ReIndex"? Even though I can "search" for spring-core 3.0.2.RELEASE, my m2eclipse still cannot find it. I can also see spring-core-3.0.2.RELEASE in the "index" (but it is not available in "storage"). Why can't m2eclipse make use of it? It seems m2eclipse can only install artifacts that are in storage; if this is how Nexus works, how do I "force" Nexus to download spring-core-3.0.2.RELEASE into its storage? How do I solve these strange incompatibilities? Thanks a lot!

    Read the article

  • How strict should I be about "do the simplest thing that could possibly work" while doing TDD?

    - by Support - multilanguage SO
    For TDD you have to: create a test that fails; do the simplest thing that could possibly work to pass the test; add more variants of the test and repeat; refactor when a pattern emerges. With this approach you're supposed to cover all the cases (that come to my mind, at least), but I wonder if I am being too strict here, and if it is possible to "think ahead" to some scenarios instead of simply discovering them. For instance, I'm processing a file, and if it doesn't conform to a certain format I am to throw an InvalidFormatException. So my first test was: @Test void testFormat(){ // empty doesn't do anything... processor.validate("empty.txt"); try { processor.validate("invalid.txt"); assert false: "Should have thrown InvalidFormatException"; } catch( InvalidFormatException ife ) { assert "Invalid format".equals( ife.getMessage() ); } } I run it and it fails because it doesn't throw an exception. So the next thing that comes to my mind is "Do the simplest thing that could possibly work", so I write: public void validate( String fileName ) throws InvalidFormatException { if( fileName.equals("invalid.txt") ) { throw new InvalidFormatException("Invalid format"); } } Doh!! (Although the real code is a bit more complicated, I found myself doing something like this several times.) I know that I will eventually have to add another file name and another test, which would make this approach impractical and force me to refactor to something that makes sense (which, if I understood correctly, is the point of TDD: to discover the patterns the usage unveils), but: Q: am I taking the "Do the simplest thing..." advice too literally?
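
    For what it's worth, the usual next move in this style is triangulation: a second invalid file makes the hard-coded filename check untenable, which is what forces a real format check to appear. A hedged sketch of that step (the second file name is made up; the exception and message come from the question):

        // Illustrative only: a second failing case that makes the "simplest thing" fake impossible to keep.
        @Test
        public void secondInvalidFileForcesRealValidation() {
            try {
                processor.validate("invalid2.txt");   // hypothetical second malformed file
                assert false : "Should have thrown InvalidFormatException";
            } catch (InvalidFormatException ife) {
                assert "Invalid format".equals(ife.getMessage());
            }
        }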

    Read the article

  • CSS3PIE issues in IE6 and 8

    - by Gordon
    I'm using CSS3PIE to apply some rounded corners to elements in Internet Explorer that will get them by stylesheet in other browsers. I've run into some issues with it though. In IE8, I discovered that any element that had the PIE behaviour would behave strangely. The container would jump a few pixels to the right, but the content would stay in its original position, giving the appearance that the content had all shifted left relative to its container. This would be especially problematic on elements with no padding or small amounts of padding. I was able to hack my way around the problem in IE8 by using X-UA-Compatible, but I'd rather avoid this solution if at all possible. I don't have access to IE9 for testing, but my understanding is that hacks like PIE aren't necessary there, and it would be wasteful to force a compatibility mode in a browser that doesn't need it. I have worse issues in IE6, with the PIE layout breaking down completely on a list that is set up to use display:inline; zoom:1; list items (to simulate inline-block, which works in IE8 and the other browsers). Here the borders of the list items get rendered in completely the wrong place. So ideally, I'd like to have PIE work properly in IE6, and in IE8 without having to resort to compatibility mode. As far as IE6 goes, a graceful fallback where PIE is just not applied will do. IE7 is the only browser where the page displays as intended. I can't provide an example page just at the moment, unfortunately; I can add one later though. Follow up: Here are some screen grabs made with IE Tester. I'm hoping they will make things a little clearer for everybody. As you can see, IE7 is fine. However, in IE8, the containers are offset to the left relative to their content, and in IE6 the list elements (with the rounded 1 pixel border) are a complete mess! Full size versions for IE8, IE7 and IE6 are also available.

    Read the article

  • Is this text wrapping technique possible in CSS and jQuery?

    - by alex
    I have built a sliding text thing for a website: http://www.solomonadventures.com/~new/adventure-tours/seafari-tours/ The background contains the menu (on the right-hand side), and when the page originally loads I have placed an element to make the text look like it is wrapping around the menu. Now, I have a sliding text feature I was asked to implement; the buttons to use it are currently in the top left corner. My question is: when I slide the content down, am I able to somehow make the text still wrap around the menu? This is all I have thought of so far (all with trade-offs): make the text appear beneath the menu (no need to wrap); make the text only as wide as the start of the menu (no need to wrap); manually place placeholders in the text that force line breaks so it appears to wrap (not elegant, and the site uses a CMS too). Is there any jQuery selector I could write that would allow me to select the topmost paragraph (once slid to the top) or the topmost text node, so I could do an after() to place a new placeholder element to force it to wrap? Any other solutions? Many thanks.

    Read the article

  • Battery drains even with app off screen, could it be Location Services doing it?

    - by John Jorsett
    I run my app, which uses GPS and Bluetooth, then hit the back button so it goes off screen. I verified via LogCat that the app's onDestroy was called. OnDestroy removes the location listeners and shuts down my app's Bluetooth service. I look at the phone 8 hours later and half the battery charge has been consumed, and my app was responsible according the phone's Battery Use screen. If I use the phone's Settings menu to Force Stop the app, this doesn't occur. So my question is: do I need to do something more than remove the listeners to stop Location Services from consuming power? That's the only thing I can think of that would be draining the battery to that degree when the app is supposedly dormant. Here's my onStart() where I turn on the location-related stuff and Bluetooth: @Override public void onStart() { super.onStart(); if(D_GEN) Log.d(TAG, "MainActivity onStart, adding location listeners"); // If BT is not on, request that it be enabled. // setupBluetooth() will then be called during onActivityResult if (!mBluetoothAdapter.isEnabled()) { Intent enableIntent = new Intent(BluetoothAdapter.ACTION_REQUEST_ENABLE); startActivityForResult(enableIntent, REQUEST_ENABLE_BT); // Otherwise, setup the Bluetooth session } else { if (mBluetoothService == null) setupBluetooth(); } // Define listeners that respond to location updates mLocationManager = (LocationManager) this.getSystemService(Context.LOCATION_SERVICE); mLocationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, GPS_UPDATE_INTERVAL, 0, this); mLocationManager.addGpsStatusListener(this); mLocationManager.addNmeaListener(this); } And here's my onDestroy() where I remove them: public void onDestroy() { super.onDestroy(); if(D_GEN) Log.d(TAG, "MainActivity onDestroy, removing update listeners"); // Remove the location updates if(mLocationManager != null) { mLocationManager.removeUpdates(this); mLocationManager.removeGpsStatusListener(this); mLocationManager.removeNmeaListener(this); } if(D_GEN) Log.d(TAG, "MainActivity onDestroy, finished removing update listeners"); if(D_GEN) Log.d(TAG, "MainActivity onDestroy, stopping Bluetooth"); stopBluetooth(); if(D_GEN) Log.d(TAG, "MainActivity onDestroy finished"); }
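
    The listener removal above depends entirely on onDestroy() running. A common, easy-to-test variation is to tie GPS registration to the foreground lifecycle instead, registering in onResume() and releasing in onPause(), so the listeners are gone whenever the activity leaves the screen. A sketch reusing the fields and constants from the code above:

        @Override
        protected void onResume() {
            super.onResume();
            // Register only while the activity is in the foreground.
            mLocationManager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
            mLocationManager.requestLocationUpdates(
                    LocationManager.GPS_PROVIDER, GPS_UPDATE_INTERVAL, 0, this);
            mLocationManager.addGpsStatusListener(this);
            mLocationManager.addNmeaListener(this);
        }

        @Override
        protected void onPause() {
            super.onPause();
            // Release GPS as soon as the activity leaves the foreground.
            if (mLocationManager != null) {
                mLocationManager.removeUpdates(this);
                mLocationManager.removeGpsStatusListener(this);
                mLocationManager.removeNmeaListener(this);
            }
        }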

    Read the article

  • C# Step by Step Execution

    - by Sheldon
    Hi. I'm building an app that uses a scanner API and an image-format converter. I have a method (actually a click event) that does this: private void ButtonScan&Parse_Click(object sender, EventArgs e) { short scan_result = scanner_api.Scan(); if (scan_result == 1) parse_api.Parse(); // This will check for a saved image the scanner_api stores on disk, and then convert it. } The problem is that the if condition (scan_result == 1) is evaluated immediately, so it just doesn't work. How can I force the CLR to wait until the API returns the expected result? NOTE: just by doing something like this: private void ButtonScan&Parse_Click(object sender, EventArgs e) { short scan_result = scanner_api.Scan(); MessageBox.Show("Result = " + scan_result); if (scan_result == 1) parse_api.Parse(); // This will check for a saved image the scanner_api stores on disk, and then convert it. } it works and displays the results. Is there a way to do this, and how? Thank you very much!

    Read the article

  • EditText items in a scrolling list lose their changes when scrolled off the screen

    - by ianww
    I have a long scrolling list of EditText items created by a SimpleCursorAdapter and prepopulated with values from an SQLite database. I make this by: cursor = db.rawQuery("SELECT _id, criterion, localweight, globalweight FROM " + dbTableName + " ORDER BY criterion", null); startManagingCursor(cursor); mAdapter = new SimpleCursorAdapter(this, R.layout.weight_edit_items, cursor, new String[]{"criterion","localweight","globalweight"}, new int[]{R.id.criterion_edit, R.id.localweight_edit, R.id.globalweight_edit}); this.setListAdapter(mAdapter); The scrolling list is several emulator screens long. The items display OK - scrolling through them shows that each has the correct value from the database. I can make an edit change to any of the EditTexts and the new text is accepted and displayed in the box. But...if I then scroll the list far enough to take the edited item off the screen, when I scroll back to look at it again its value has returned to what it was before I made the changes, ie. my edits have been lost. In trying to sort this out, I've done a getText to look at what's in the EditText after I've done my edits (and before a scroll) and getText returns the original text, even though the EditText is displaying my new text. It seems that the EditText has only accepted my edits superficially and they haven't been bound to the EditText, meaning they get dropped when scrolled off the screen. Can anyone please tell me what's going on here and what I need to do to force the EditText to retain its edits? Thanks Ian
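
    What you are describing is standard ListView behaviour: row views are recycled as they scroll, and the SimpleCursorAdapter re-binds them from the cursor, overwriting anything typed into a recycled EditText. One common approach is to capture edits with a TextWatcher into your own backing store (an in-memory map here for illustration; you could equally write straight back to the database) and re-apply them when a row is re-bound, for example via a ViewBinder. A minimal sketch covering just the localweight column:

        // Sketch: keep user edits in a map keyed by row id so recycled rows can be re-populated.
        private final Map<Long, String> pendingEdits = new HashMap<Long, String>();

        private final SimpleCursorAdapter.ViewBinder binder = new SimpleCursorAdapter.ViewBinder() {
            @Override
            public boolean setViewValue(View view, Cursor cursor, int columnIndex) {
                if (view.getId() != R.id.localweight_edit) {
                    return false;                           // default binding for the other columns
                }
                final long rowId = cursor.getLong(cursor.getColumnIndex("_id"));
                EditText edit = (EditText) view;

                // Detach the recycled view's old watcher before changing its text.
                TextWatcher old = (TextWatcher) edit.getTag();
                if (old != null) {
                    edit.removeTextChangedListener(old);
                }

                String pending = pendingEdits.get(rowId);
                edit.setText(pending != null ? pending : cursor.getString(columnIndex));

                TextWatcher watcher = new TextWatcher() {
                    public void beforeTextChanged(CharSequence s, int start, int count, int after) { }
                    public void onTextChanged(CharSequence s, int start, int before, int count) { }
                    public void afterTextChanged(Editable s) {
                        pendingEdits.put(rowId, s.toString());   // survives recycling
                    }
                };
                edit.addTextChangedListener(watcher);
                edit.setTag(watcher);
                return true;                                // we handled this binding
            }
        };
        // ... then, after creating the adapter: mAdapter.setViewBinder(binder);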

    Read the article

  • How to Refresh / Reload a KML layer in OpenLayers. Dynamic KML Layer.

    - by Ozaki
    TLDR: See my answer below on how to refresh the layer. So far I have tried a refresh function as follows: function RefreshKMLData(layer) { layer.loaded = false; layer.setVisibility(true); layer.redraw({ force: true }); } setting an interval for the function: window.setInterval(RefreshKMLData, 5000, KMLLAYER); the layer itself: var KMLLAYER = new OpenLayers.Layer.Vector("MYKMLLAYER", { projection: new OpenLayers.Projection("EPSG:4326"), strategies: [new OpenLayers.Strategy.Fixed()], protocol: new OpenLayers.Protocol.HTTP({ url: MYKMLURL, format: new OpenLayers.Format.KML({ extractStyles: true, extractAttributes: true }) }) }); and the URL for KMLLAYER, with Math.random() so it doesn't cache: var MYKMLURL = 'http://' + host + '/data?_salt=' + Math.random(); I would have thought that this would refresh the layer: setting loaded to false unloads it, setting visibility to true reloads it, and with Math.random() it shouldn't be cached. So has anyone done this before, or does anyone know how I can get this to work? TLDR: See my answer below on how to refresh the layer.

    Read the article

  • how do I best create a set of list classes to match my business objects

    - by ken-forslund
    I'm a bit fuzzy on the best way to solve the problem of needing a list class for each of my business objects that implements some overridden functions. Here's the setup: I have a baseObject that sets up the database and has its proper Dispose() method. All my other business objects inherit from it and, if necessary, override Dispose(). Some of these classes also contain arrays (lists) of other objects, so I create a class that holds a List of these. I'm aware I could just use the generic List, but that doesn't let me add extra features like a Dispose() that will loop through and clean up. So if I had objects called User, Project and Schedule, I would create UserList, ProjectList and ScheduleList. In the past, I have simply had these inherit from List<T> (with the appropriate class as T) and then written the pile of common functions I wanted each to have, like Dispose(). This meant I would verify by hand that each of these List classes had the same set of methods. Some of these classes had pretty simple versions of these methods that could have been inherited from a base list class. I could write an interface to force me to ensure that each of my List classes has the same functions, but interfaces don't let me write common base functions that SOME of the lists might override. I had tried to write a baseObjectList that inherited from List<baseObject>, and then make my other Lists inherit from that, but there are issues with that (which is really why I came here), one of which was trying to use the Find() method with a predicate. I've simplified the problem down to a discussion of just the Dispose() method on the list, which loops through and cleans up, but in reality I have several other common functions that I want all my lists to have. What's the best practice to solve this organizational matter?
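
    The question is about C#, so treat this Java sketch purely as an illustration of the shape being described: a generic base list constrained to disposable items, with one shared dispose() that concrete lists inherit or override.

        import java.util.ArrayList;

        // Stand-in for the business objects' common base and its cleanup method.
        interface Disposable {
            void dispose();
        }

        // Generic base list: the shared behaviour lives here exactly once.
        class DisposableList<T extends Disposable> extends ArrayList<T> {
            public void dispose() {
                for (T item : this) {
                    item.dispose();
                }
                clear();
            }
        }

        // Concrete lists exist only to add type-specific extras or overrides.
        class UserList extends DisposableList<User> { }
        class ProjectList extends DisposableList<Project> { }

        // Minimal stand-ins so the sketch compiles.
        class User implements Disposable {
            public void dispose() { /* release user resources */ }
        }
        class Project implements Disposable {
            public void dispose() { /* release project resources */ }
        }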

    Read the article

  • MemoryFailPoint fires too early in WinXP 64

    - by msedi
    Hello, I have created a volume class (called VoxelVolume) with self-organizing memory management, since the GC in C# didn't provide a good mechanism for managing the contents of the volume for mapping, unmapping and remapping. Although I could have used the mechanisms of virtual memory, the problem is that the files are often too large to fit into the page file, and I don't want to force users to increase the pagefile size. Currently this system is working quite well and there is no problem with lacking resources or OutOfMemoryExceptions, since the InsufficientMemoryException raised via MemoryFailPoint works quite well. This was all tested on a 32-bit WinXP system with 2GB of main memory. Running the same mechanism on a 64-bit system with 32GB of main memory also works well, but when the application runs, the MemoryFailPoint suddenly throws an exception although 24GB of main memory are still free. Another point is that once the MemoryFailPoint has fired, it fires every time and there is no chance to get rid of it. From what I have read so far, there is a small object heap and a large object heap (SOH and LOH), but the GC only really takes care of the SOH, and I can free the SOH of unused objects by applying GC.Collect() and GC.WaitForPendingFinalizers(). The MemoryFailPoint is obviously the only way to get a little bit of control over the LOH, but since there is enough memory left on the system I see no reason why the MemoryFailPoint should fire. Does anyone here have experience with using MemoryFailPoint? Thank you for your help, Martin

    Read the article

  • Fast path cache generation for a connected node graph

    - by Sukasa
    I'm trying to get a faster pathfinding mechanism in place for a connected node graph in a game I'm working on. The nodes are classed into two types, "Networks" and "Routers." In this picture, the blue circles represent routers and the grey rectangles networks. Each network keeps a list of which routers it is connected to, and vice versa; routers cannot connect directly to other routers, and networks cannot connect directly to other networks. Networks list which routers they're connected to, and routers do the same. I need an algorithm that will map out a path, measured in the number of networks crossed, for each possible source and destination network, excluding paths where the source and destination are the same network. I have one right now, however it is unusably slow, taking about two seconds to map the paths, which becomes incredibly noticeable for all connected players. The current algorithm is a depth-first brute-force search (it was thrown together in about an hour just to get the path caching working) which returns an array of networks in the order they are traversed, which explains why it's so slow. Are there any algorithms that are more efficient? As a side note, while these example graphs have four networks, the in-practice graphs have 55 networks and about 20 routers in use. Impossible paths can also occur, and the network/router graph topology can change at any time, requiring the path cache to be rebuilt. What approach/algorithm would likely provide the best results for this type of graph?
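
    For an unweighted graph of this size (tens of nodes), one breadth-first search per source network gives shortest paths in networks-crossed hops and rebuilds the whole cache very quickly whenever the topology changes. A sketch in Java, assuming adjacency maps built from the lists each node already keeps (all names are illustrative):

        import java.util.*;

        class PathCache {
            // One BFS from a given source network; returns destination -> path (source ... destination).
            static Map<String, List<String>> pathsFrom(
                    String source,
                    Map<String, List<String>> networkToRouters,
                    Map<String, List<String>> routerToNetworks) {

                Map<String, String> previous = new HashMap<>();   // network -> network we reached it from
                Set<String> visited = new HashSet<>();
                visited.add(source);
                Deque<String> queue = new ArrayDeque<>();
                queue.add(source);

                while (!queue.isEmpty()) {
                    String network = queue.poll();
                    for (String router : networkToRouters.getOrDefault(network, Collections.emptyList())) {
                        for (String next : routerToNetworks.getOrDefault(router, Collections.emptyList())) {
                            if (visited.add(next)) {              // first visit = fewest networks crossed
                                previous.put(next, network);
                                queue.add(next);
                            }
                        }
                    }
                }

                // Walk each predecessor chain backwards to build the actual network-by-network path.
                Map<String, List<String>> paths = new HashMap<>();
                for (String destination : previous.keySet()) {
                    LinkedList<String> path = new LinkedList<>();
                    for (String n = destination; n != null; n = previous.get(n)) {
                        path.addFirst(n);
                    }
                    paths.put(destination, path);                 // path.size() - 1 = networks crossed
                }
                return paths;                                     // unreachable destinations are simply absent
            }
        }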

    Read the article

  • Eliminate full table scan due to BETWEEN (and GROUP BY)

    - by Dave Jarvis
    Description According to the explain command, there is a range that is causing a query to perform a full table scan (160k rows). How do I keep the range condition and reduce the scanning? I expect the culprit to be: Y.YEAR BETWEEN 1900 AND 2009 AND Code Here is the code that has the range condition (the STATION_DISTRICT is likely superfluous). SELECT COUNT(1) as MEASUREMENTS, AVG(D.AMOUNT) as AMOUNT, Y.YEAR as YEAR, MAKEDATE(Y.YEAR,1) as AMOUNT_DATE FROM CITY C, STATION S, STATION_DISTRICT SD, YEAR_REF Y FORCE INDEX(YEAR_IDX), MONTH_REF M, DAILY D WHERE -- For a specific city ... -- C.ID = 10663 AND -- Find all the stations within a specific unit radius ... -- 6371.009 * SQRT( POW(RADIANS(C.LATITUDE_DECIMAL - S.LATITUDE_DECIMAL), 2) + (COS(RADIANS(C.LATITUDE_DECIMAL + S.LATITUDE_DECIMAL) / 2) * POW(RADIANS(C.LONGITUDE_DECIMAL - S.LONGITUDE_DECIMAL), 2)) ) <= 50 AND -- Get the station district identification for the matching station. -- S.STATION_DISTRICT_ID = SD.ID AND -- Gather all known years for that station ... -- Y.STATION_DISTRICT_ID = SD.ID AND -- The data before 1900 is shaky; insufficient after 2009. -- Y.YEAR BETWEEN 1900 AND 2009 AND -- Filtered by all known months ... -- M.YEAR_REF_ID = Y.ID AND -- Whittled down by category ... -- M.CATEGORY_ID = '003' AND -- Into the valid daily climate data. -- M.ID = D.MONTH_REF_ID AND D.DAILY_FLAG_ID <> 'M' GROUP BY Y.YEAR Update The SQL is performing a full table scan, which results in MySQL performing a "copy to tmp table", as shown here: +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+ | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra | +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+ | 1 | SIMPLE | C | const | PRIMARY | PRIMARY | 4 | const | 1 | | | 1 | SIMPLE | Y | range | YEAR_IDX | YEAR_IDX | 4 | NULL | 160422 | Using where | | 1 | SIMPLE | SD | eq_ref | PRIMARY | PRIMARY | 4 | climate.Y.STATION_DISTRICT_ID | 1 | Using index | | 1 | SIMPLE | S | eq_ref | PRIMARY | PRIMARY | 4 | climate.SD.ID | 1 | Using where | | 1 | SIMPLE | M | ref | PRIMARY,YEAR_REF_IDX,CATEGORY_IDX | YEAR_REF_IDX | 8 | climate.Y.ID | 54 | Using where | | 1 | SIMPLE | D | ref | INDEX | INDEX | 8 | climate.M.ID | 11 | Using where | +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+ Related http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html http://dev.mysql.com/doc/refman/5.0/en/where-optimizations.html http://stackoverflow.com/questions/557425/optimize-sql-that-uses-between-clause Thank you!

    Read the article

  • If attacker has original data, and encrypted data, can they determine the passphrase?

    - by Brad Cupit
    If an attacker has several distinct items (for example: e-mail addresses) and knows the encrypted value of each item, can the attacker more easily determine the secret passphrase used to encrypt those items? Meaning, can they determine the passphrase without resorting to brute force? This question may sound strange, so let me provide a use-case: User signs up to a site with their e-mail address Server sends that e-mail address a confirmation URL (for example: https://my.app.com/confirmEmailAddress/bill%40yahoo.com) Attacker can guess the confirmation URL and therefore can sign up with someone else's e-mail address, and 'confirm' it without ever having to sign in to that person's e-mail account and see the confirmation URL. This is a problem. Instead of sending the e-mail address plain text in the URL, we'll send it encrypted by a secret passphrase. (I know the attacker could still intercept the e-mail sent by the server, since e-mail are plain text, but bear with me here.) If an attacker then signs up with multiple free e-mail accounts and sees multiple URLs, each with the corresponding encrypted e-mail address, could the attacker more easily determine the passphrase used for encryption? Alternative Solution I could instead send a random number or one-way hash of their e-mail address (plus random salt). This eliminates storing the secret passphrase, but it means I need to store that random number/hash in the database. The original approach above does not require this extra table. I'm leaning towards the the one-way hash + extra table solution, but I still would like to know the answer: does having multiple unencrypted e-mail addresses and their encrypted counterparts make it easier to determine the passphrase used?

    Read the article

  • Database layout for an application with geocoding features using geokit

    - by vooD
    I'm developing a real estate web catalogue and want to geocode every ad using geokit gem. My question is what would be the best database layout from the performance point if i want to make search by country, city of the selected country, administrative area or nearest metro station of the selected city. Available countries, cities, administrative areas and metro sations should be defined by the administrator of catalogue and must be validated by geocoding. I came up with single table: create_table "geo_locations", :force => true do |t| t.integer "geo_location_id" #parent geo location (ex. country is parent geo location of city t.string "country", :null => false #necessary for any geo location t.string "city", #not null for city geo location and it's children t.string "administrative_area" #not null for administrative_area geo location and it's children t.string "thoroughfare_name" #not null for metro station or street name geo location and it's children t.string "premise_number" #house number t.float "lng", :null => false t.float "lat", :null => false t.float "bound_sw_lat", :null => false t.float "bound_sw_lng", :null => false t.float "bound_ne_lat", :null => false t.float "bound_ne_lng", :null => false t.integer "mappable_id" t.string "mappable_type" t.string "type" #country, city, administrative area, metro station or address end Final geo location is address it contains all neccessary information to put marker of the real estate ad on the map. But i'm still stuck on search functionality. Any help would be highly appreciated.

    Read the article
