Search Results

Search found 21212 results on 849 pages for 'apt key'.

Page 738 of 849

  • Why are symbols not frozen strings?

    - by Alex Chaffee
    I understand the theoretical difference between Strings and Symbols. I understand that Symbols are meant to represent a concept or a name or an identifier or a label or a key, while Strings are a bag of characters. I understand that Strings are mutable and transient, whereas Symbols are immutable and permanent. I even like how Symbols look different from Strings in my text editor.

    What bothers me is that, practically speaking, Symbols are so similar to Strings that the fact that they're not implemented as Strings causes a lot of headaches. They don't even support duck typing or implicit coercion, unlike the other famous "the same but different" couple, Float and Fixnum. The mere existence of HashWithIndifferentAccess, and its rampant use in Rails and other frameworks, demonstrates that there's a problem here, an itch that needs to be scratched.

    Can anyone tell me a practical reason why Symbols should not be frozen Strings? Other than "because that's how it's always been done" (historical) or "because symbols are not strings" (begging the question). Consider the following astonishing behavior:

        :apple == "apple"                     #=> false, should be true
        :apple.hash == "apple".hash           #=> false, should be true
        {apples: 10}["apples"]                #=> nil, should be 10
        {"apples" => 10}[:apples]             #=> nil, should be 10
        :apple.object_id == "apple".object_id #=> false, but that's actually fine

    All it would take to make the next generation of Rubyists less confused is this:

        class Symbol < String
          def initialize *args
            super
            self.freeze
          end
        end

    (and a lot of other library-level hacking, but still, not too complicated)

    See also:
    http://onestepback.org/index.cgi/Tech/Ruby/SymbolsAreNotImmutableStrings.red
    http://www.randomhacks.net/articles/2007/01/20/13-ways-of-looking-at-a-ruby-symbol
    Why does my code break when using a hash symbol, instead of a hash string?
    Why use symbols as hash keys in Ruby?
    What are symbols and how do we use them?
    Ruby Symbols vs Strings in Hashes
    Can't get the hang of symbols in Ruby


  • Cakephp how to use Set Class to make an assoc array?

    - by michael
    I have the output array from a $Model->find() query which also pulls data from a hasMany relationship:

        Array
        (
            [Parent] => Array ( [id] => 1 )
            [Child] => Array
                (
                    [0] => Array ( [id] => aaa [score] => 3 [src] => stage6/tn~4bbb38cc-0018-49bf-96a9-11a0f67883f5.jpg [parent_id] => 1 )
                    [1] => Array ( [id] => bbb [score] => 5 [src] => stage0/tn~4bbb38cc-00ac-4b25-b074-11a0f67883f5.jpg [parent_id] => 1 )
                    [2] => Array ( [id] => ccc [score] => 2 [src] => stage4/tn~4bbb38cc-01c8-44bd-b71d-11a0f67883f5.jpg [parent_id] => 1 )
                )
        )

    I'd like to transform this output into something like this, where the child id is the key to additional child attributes:

        Array
        (
            [aaa] => Array ( [score] => 3 [src] => stage6/tn~4bbb38cc-0018-49bf-96a9-11a0f67883f5.jpg )
            [bbb] => Array ( [score] => 5 [src] => stage0/tn~4bbb38cc-00ac-4b25-b074-11a0f67883f5.jpg )
            [ccc] => Array ( [score] => 2 [src] => stage4/tn~4bbb38cc-01c8-44bd-b71d-11a0f67883f5.jpg )
        )

    Is there an easy way to use Set::extract, Set::combine, Set::insert, etc. to do this efficiently? I cannot figure it out.
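
    A hedged sketch of one possible approach (untested, assuming the CakePHP 1.2-era Set class; the path expressions may need adjusting for your exact find() output):

        // Key each Child row by its id, then map it to the whole row.
        // {n} iterates over the numeric indexes in the Child list.
        $byId = Set::combine($data, 'Child.{n}.id', 'Child.{n}');

        // Optionally strip the id/parent_id keys afterwards:
        foreach ($byId as $id => $row) {
            unset($byId[$id]['id'], $byId[$id]['parent_id']);
        }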


  • How to check if JavaScript object is JSON

    - by Wei Hao
    I have a nested JSON object that I need to loop through, and the value of each key could be a String, JSON array or another JSON object. Depending on the type of object, I need to carry out different operations. Is there any way I can check the type of the object to see if it is a String, JSON object or JSON array?

    I tried using typeof and instanceof but both didn't seem to work: typeof returns "object" for both a JSON object and an array, and instanceof gives an error when I do obj instanceof JSON.

    To be more specific, after parsing the JSON into a JS object, is there any way I can check if it is a normal string, or an object with keys and values (from a JSON object), or an array (from a JSON array)? For example:

    JSON:

        var data = {"hi": {"hello": ["hi1","hi2"] }, "hey":"words" }

    JavaScript:

        var jsonObj = JSON.parse(data);
        var level1 = jsonObj.hi;
        var text = jsonObj.hey;
        var arr = level1.hello;
        //how to check if level1 was formerly a JSON object?
        //how to check if arr was formerly a JSON array?
        //how to check if text is a string?
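
    A minimal sketch of one way to distinguish the three cases after parsing (note that JSON.parse expects a string, so the data literal above would need to be quoted first; Array.isArray assumes an ES5-era environment or a shim):

        function kindOf(value) {
            if (typeof value === 'string') return 'string';  // plain string
            if (value === null) return 'null';               // typeof null is 'object', so test first
            if (Array.isArray(value)) return 'array';        // was a JSON array
            if (typeof value === 'object') return 'object';  // was a JSON object
            return typeof value;                             // number, boolean, undefined
        }

        kindOf(jsonObj.hey);      // 'string'
        kindOf(jsonObj.hi);       // 'object'
        kindOf(jsonObj.hi.hello); // 'array'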


  • Filtering on a left join in SQLalchemy

    - by Adam Ernst
    Using SQLAlchemy I want to perform a left outer join and filter out rows that DO have a match in the joined table.

    I'm sending push notifications, so I have a Notification table. This means I also have an ExpiredDeviceId table to store device_ids that are no longer valid. (I don't want to just delete the affected notifications, as the user might later re-install the app, at which point the notifications should resume according to Apple's docs.)

        CREATE TABLE Notification (device_id TEXT, time DATETIME);
        CREATE TABLE ExpiredDeviceId (device_id TEXT PRIMARY KEY, expiration_time DATETIME);

    Note: there may be multiple Notifications per device_id. There is no "Device" table for each device. So when doing SELECT FROM Notification I should filter accordingly. I can do it in SQL:

        SELECT * FROM Notification
        LEFT OUTER JOIN ExpiredDeviceId ON Notification.device_id = ExpiredDeviceId.device_id
        WHERE expiration_time IS NULL

    But how can I do it in SQLAlchemy?

        sess.query(
            Notification,
            ExpiredDeviceId
        ).outerjoin(
            (ExpiredDeviceId, Notification.device_id == ExpiredDeviceId.device_id)
        ).filter(
            ???
        )

    Alternately I could do this with a device_id NOT IN (SELECT device_id FROM ExpiredDeviceId) clause, but that seems way less efficient.
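
    A hedged completion of the filter (a sketch, assuming the query API shown above): comparing the joined table's own column against None makes SQLAlchemy emit IS NULL, which keeps only the rows with no match.

        results = sess.query(Notification).outerjoin(
            (ExpiredDeviceId, Notification.device_id == ExpiredDeviceId.device_id)
        ).filter(
            ExpiredDeviceId.device_id == None  # rendered as "ExpiredDeviceId.device_id IS NULL"
        ).all()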


  • Should I use concrete Inheritance or not?

    - by Mez
    I have a project using Propel where I have three objects (potentially more in the future):

    Occasion
    Event extends Occasion
    Gig extends Occasion

    Occasion is an item that holds the shared things that will always be needed (Venue, start, end, etc.).

    With this, I want to be able to add in extra functionality, say for example adding "Band" objects to the Gig object, or "Flyers" to an "Event" object. For this, I plan to create objects for these. However, without concrete inheritance, I have to have the foreign key point to the Occasion object - giving the (Propel-generated) functions for all of these extra bits to anything inherited from Occasion. I could, in theory, do this without a foreign constraint, and add in functions to use the Peer or Query classes to get things related to the "Gig" or similar. Whereas with concrete inheritance, I would only have these functions in the things where they belong.

    I think the decision here is whether I should duck type the objects (after all, they are occasions) or whether I should just use the "Occasion" object as a "template" (only being used to search for things, like all occasions at a venue).

    Thoughts? Comments?


  • How can a ListBoxItem property be set in Silverlight at runtime?

    - by sympatric greg
    Given this XAML, I need to resize the UserControl in response to user input. How can I set a new width for the ListBoxItem (or perhaps the StackPanel)?

        <ScrollViewer x:Name="ScrollViewer" Margin="0" BorderBrush="Transparent" Width="165"
                      VerticalScrollBarVisibility="Auto" HorizontalScrollBarVisibility="Hidden">
            <ListBox x:Name="AttributeListBox" ItemsSource="{Binding Attributes}" BorderBrush="Red"
                     Width="160" Foreground="AntiqueWhite" Background="Transparent" IsEnabled="False"
                     HorizontalAlignment="Stretch">
                <ListBox.ItemContainerStyle>
                    <Style TargetType="ListBoxItem">
                        <Setter Property="HorizontalAlignment" Value="Stretch"/>
                        <Setter Property="Width" Value="150"/>
                        <Setter Property="Margin" Value="0,-2,0,0"/>
                        <Setter Property="HorizontalContentAlignment" Value="Left" />
                        <Setter Property="Template" Value="{StaticResource ListBoxItemSansFocus}" />
                    </Style>
                </ListBox.ItemContainerStyle>
                <ListBox.ItemTemplate>
                    <DataTemplate>
                        <StackPanel x:Name="ListBoxItemStackPanel" HorizontalAlignment="Stretch" Orientation="Vertical">
                            <TextBlock FontSize="10" Text="{Binding Key}" Foreground="White" FontWeight="Bold"
                                       HorizontalAlignment="Stretch" Margin="2,0,0,0" TextWrapping="Wrap"/>
                            <TextBlock FontSize="10" Text="{Binding Value}" Foreground="White"
                                       Margin="6,-2,0,0" TextWrapping="Wrap"/>
                        </StackPanel>
                    </DataTemplate>
                </ListBox.ItemTemplate>
            </ListBox>
        </ScrollViewer>
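
    A hedged sketch of one runtime approach (untested; assumes the item containers have already been generated): walk the ListBox's ItemContainerGenerator and set Width on each container directly.

        // Resize every generated ListBoxItem container to a new width.
        private void ResizeItems(double newWidth)
        {
            for (int i = 0; i < AttributeListBox.Items.Count; i++)
            {
                var item = AttributeListBox.ItemContainerGenerator.ContainerFromIndex(i) as ListBoxItem;
                if (item != null)
                {
                    item.Width = newWidth;
                }
            }
        }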


  • Accessing web.config from Sharepoint web part

    - by philj
    I have a VS 2008 web parts project - in this project is a web.config file, something like this:

        <?xml version="1.0"?>
        <configuration>
            <connectionStrings/>
            <system.web>
                <appSettings>
                    <add key="MFOwner" value="Blah" />
                </appSettings>
                .......

    In my web part I am trying to access values in the appSettings section. I've tried all of the code below and each returns null:

        string Owner = ConfigurationManager.AppSettings.Get("MFOwner");
        string stuff1 = ConfigurationManager.AppSettings["MFOwner"];
        string stuff3 = WebConfigurationManager.AppSettings["MFOwner"];
        string stuff4 = WebConfigurationManager.AppSettings.Get("MFOwner");
        string stuff2 = ConfigurationManager.AppSettings["MFowner".ToString()];

    I've tried this code I found:

        NameValueCollection sAll;
        sAll = ConfigurationManager.AppSettings;

        string a;
        string b;
        foreach (string s in sAll.AllKeys)
        {
            a = s;
            b = sAll.Get(s);
        }

    and stepped through it in debug mode - that is getting things like:

        FeedCacheTimer
        FeedPageURL
        FeedXsl1
        ReportViewerMessages

    which is NOT coming from anything in my web.config file... maybe a config file in SharePoint itself? How do I access a web.config (or any other kind of config file!) local to my web part?

    thanks, Phil J
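
    Two hedged observations, based on the standard .NET config schema rather than anything SharePoint-specific: appSettings must be a direct child of configuration (in the snippet above it sits inside system.web, where it is ignored), and a deployed web part reads the web.config of the IIS web application it actually runs in - the SharePoint site's - not the one in your VS project. A minimal sketch (the path is an assumption, the usual SharePoint virtual directory location):

        <!-- in the SharePoint web application's web.config,
             e.g. C:\Inetpub\wwwroot\wss\VirtualDirectories\80\web.config -->
        <configuration>
            <appSettings>
                <add key="MFOwner" value="Blah" />
            </appSettings>
            ...
        </configuration>

    read from the web part with:

        string owner = System.Web.Configuration.WebConfigurationManager.AppSettings["MFOwner"];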


  • Posting status via Facebook's graph api

    - by Simon R
    In PHP, I am trying to post a status to our Facebook fan page using the Graph API. Despite following the instructions Facebook gives, the following code does not seem to update the status. Here is the code:

        $xPost['access_token'] = "{key}";
        $xPost['message'] = "Posting a message test.";

        $ch = curl_init('https://graph.facebook.com/{page_id}/feed');
        curl_setopt($ch, CURLOPT_VERBOSE, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_HEADER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, 120);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_POSTFIELDS, $xPost);
        curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 1);
        curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 1);
        curl_setopt($ch, CURLOPT_CAINFO, NULL);
        curl_setopt($ch, CURLOPT_CAPATH, NULL);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 0);
        $result = curl_exec($ch);

    Does anyone know why this code is not working? The access_token is correct.
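
    A hedged debugging step (plain PHP/cURL, nothing Facebook-specific assumed): inspect what the call actually returned, since the Graph API reports problems such as missing permissions in the response body rather than failing silently.

        $result = curl_exec($ch);
        if ($result === false) {
            // Transport-level failure (SSL, DNS, timeout...)
            echo 'curl error: ' . curl_error($ch);
        } else {
            // On failure the Graph API returns a JSON error object,
            // e.g. for a missing permission or the wrong kind of token
            echo $result;
        }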


  • jquery ui dialog and our dearest friend, ie6

    - by bradjive
    I'm using the jQuery UI dialog for a modal popup dialog. It's working great in Firefox/Chrome but terrible in IE6.

    Problem: when I show the dialog in IE6, the browser window grows and automatically scrolls down to the bottom. The height increase and automatic scroll-down is equal to the height of the jQuery dialog. I can scroll up and then use the dialog as normal, but the behavior where it grows the window and drops is maddeningly unacceptable.

    Here is how I'm launching the window:

        <div id="dialogWindow"></div>
        ...
        $(document).ready(function() {
            var $dialog = $("#dialogWindow").dialog({
                autoOpen: false,
                modal: true,
                minWidth: 560,
                width: 560,
                resizable: "true",
                position: "top"
            });
            $('.addButton').click(function(e) {
                e.preventDefault();
                $('#dialogWindow').load('http://myurl');
                $dialog.dialog('open');
            });
        });

    I am already using the bgiframe plugin for jQuery, which is key for IE6 overlay issues, but this seems unrelated to that. Has anyone seen this before and found a workaround?


  • White space problem while using php proxy

    - by KCC
    Hi, I'm using a PHP web proxy with my URL already encoded, and I keep getting a malformed request error once I have any text after a %20. Any idea why this would be happening? The web proxy code I'm using is just a sample that I took from Yahoo! services:

        <?php
        // PHP Proxy example for Yahoo! Web services.
        // Responds to both HTTP GET and POST requests
        //
        // Author: Jason Levitt
        // December 7th, 2005
        //

        // Allowed hostname (api.local and api.travel are also possible here)
        define ('HOSTNAME', 'http://search.yahooapis.com/');

        // Get the REST call path from the AJAX application
        // Is it a POST or a GET?
        $path = ($_POST['yws_path']) ? $_POST['yws_path'] : $_GET['yws_path'];
        $url = HOSTNAME.$path;

        // Open the Curl session
        $session = curl_init($url);

        // If it's a POST, put the POST data in the body
        if ($_POST['yws_path']) {
            $postvars = '';
            while ($element = current($_POST)) {
                $postvars .= urlencode(key($_POST)).'='.urlencode($element).'&';
                next($_POST);
            }
            curl_setopt ($session, CURLOPT_POST, true);
            curl_setopt ($session, CURLOPT_POSTFIELDS, $postvars);
        }

        // Don't return HTTP headers. Do return the contents of the call
        curl_setopt($session, CURLOPT_HEADER, false);
        curl_setopt($session, CURLOPT_RETURNTRANSFER, true);

        // Make the call
        $xml = curl_exec($session);

        // The web service returns XML. Set the Content-Type appropriately
        //header("Content-Type: text/xml");

        echo $xml;

        curl_close($session);
        ?>
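
    A hedged guess at the cause, with a sketch: PHP decodes %20 back to a literal space when it populates $_GET, so the rebuilt $url contains raw spaces and the upstream request line becomes malformed. Re-encoding the path before use (names unchanged from the sample above) may fix it:

        // $_GET/$_POST values arrive URL-decoded, so re-encode spaces
        // before splicing the path back into a URL.
        $path = isset($_POST['yws_path']) ? $_POST['yws_path'] : $_GET['yws_path'];
        $url = HOSTNAME . str_replace(' ', '%20', $path);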


  • How to pass a function in a function?

    - by SoulBeaver
    That's an odd title. I would greatly appreciate it if somebody could clarify what exactly I'm asking, because I'm not so sure myself.

    I'm watching the Stanford videos on Programming Paradigms (that teacher is awesome) and I'm up to video five, when he started doing this:

        void *lSearch(void* key, void* base, int elemSize, int n, int (*cmpFn)(void*, void*))

    Naturally, I thought to myself, "Oi, I didn't know you could declare a function and define it later!". So I created my own C++ test version:

        #include <iostream>  // added so the sample compiles
        using namespace std;

        int foo(int (*bar)(void*, void*));
        int bar(void* a, void* b);

        int main(int argc, char** argv)
        {
            int *func = 0;
            foo(bar);
            cin.get();
            return 0;
        }

        int foo(int (*bar)(void*, void*))
        {
            int c(10), d(15);
            int *a = &c;
            int *b = &d;
            bar(a, b);
            return 0;
        }

        int bar(void* a, void* b)
        {
            cout << "Why hello there." << endl;
            return 0;
        }

    The question about the code is this: it fails if I declare the parameter of foo as int *bar, but not as int (*bar). Why!?

    Also, the video confuses me in that his lSearch definition

        void* lSearch( /*params*/ , int (*cmpFn)(void*, void*))

    calls cmpFn in the definition, but the call to lSearch

        lSearch( /*params*/, intCmp );

    ends up calling the separately defined function

        int intCmp(void* elem1, void* elem2);

    and I don't get how that works. Why, in lSearch, is the function called cmpFn, but defined as intCmp, which is of type int, not int*, and it still works? And why does the function in lSearch not have to have defined parameters?
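
    A hedged restating of the key distinction (standard C/C++, nothing course-specific assumed): int *bar declares a pointer-to-int parameter, while int (*bar)(void*, void*) declares a pointer-to-function parameter. The parameter name is purely local, so any function with a matching signature can be passed for it:

        #include <iostream>

        // A typedef often makes the difference obvious:
        typedef int (*CmpFn)(void*, void*); // pointer to "function taking two void*, returning int"

        int intCmp(void* a, void* b)        // matches CmpFn, so it can be passed as one
        {
            return *(int*)a - *(int*)b;
        }

        int foo(CmpFn fn)                   // same as: int foo(int (*fn)(void*, void*))
        {
            int c = 10, d = 15;
            return fn(&c, &d);              // calls whatever function the caller supplied
        }

        int main()
        {
            std::cout << foo(intCmp) << std::endl;  // prints -5
            return 0;
        }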


  • Adding iPod Support to (previously) iPhone Only App

    - by rjstelling
    When I started on my current project, there was already an App in the App Store. This App was iPhone only. My first task was to test and build a version that also ran on an iPod Touch.

    About 3 weeks ago Apple removed the option on iTunes Connect to set the device requirements, and sent an email out to all developers:

    "The App Store requires that you provide metadata about your application before submitting it. While most of this metadata is specified using the iPhone Developer Program Portal, the process for selecting device-related dependencies in iTunes Connect is no longer available. Instead, if your app relies on features that are specific to a device, such as the compass on iPhone 3GS, add the UIRequiredDeviceCapabilities key to your app's Info.plist file to indicate the specific hardware feature required."

    When I compiled the iPod compatible version I set the device requirements (UIRequiredDeviceCapabilities) in the Info.plist to:

    location-services (gps or skyhook)
    wi-fi (any device)

    However, as the App was originally uploaded with the "iPhone only" option set in iTunes Connect, this appears to be the default. The kicker is, because Apple have removed this feature there is no way to change it!

    Has anyone come up against this problem? And how did you solve it? Is it possible I have incorrect values in UIRequiredDeviceCapabilities?

    UPDATE: The app will run fine on an iPod Touch if installed as a development version via Xcode. The problem is on the App Store it is listed as iPhone only, and when iPod Touch users search in the App Store no results are returned.
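
    On the last question: a hedged Info.plist sketch based on Apple's documented capability strings. Note that the documented Wi-Fi key is spelled wifi, not wi-fi, so the hyphenated value above would not match any device:

        <key>UIRequiredDeviceCapabilities</key>
        <array>
            <string>location-services</string> <!-- any location source: GPS or Wi-Fi/Skyhook -->
            <string>wifi</string>              <!-- documented spelling; "wi-fi" is not a recognized value -->
        </array>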


  • Simple encryption - Sum of Hashes in C

    - by Dogbert
    I am attempting to demonstrate a simple proof of concept with respect to a vulnerability in a piece of code in a game written in C.

    Let's say that we want to validate a character login. The login is handled by the user choosing n items (let's just assume n=5 for now) from a graphical menu. The items are all medieval themed:

        _______________________________
        |           |           |       |
        |    Bow    |   Sword   | Staff |
        |-----------|-----------|-------|
        |  Shield   |  Potion   | Gold  |
        |___________|___________|_______|

    The user must click on each item, then choose a number for each item. The validation algorithm then does the following:

    1. Determines which items were selected
    2. Drops each string to lowercase (ie: Bow becomes bow, etc)
    3. Calculates a simple string hash for each string (ie: bow gives b=2, o=15, w=23, sum = 2+15+23 = 40)
    4. Multiplies the hash by the value the user selected for the corresponding item; this new value is called the key
    5. Sums together the keys for each of the selected items; this is the final validation hash

    IMPORTANT: The validator will accept this hash, along with non-zero multiples of it (ie: if the final hash equals 1111, then 2222, 3333, 8888, etc are also valid).

    So, for example, let's say I select:

    Bow (1)
    Sword (2)
    Staff (10)
    Shield (1)
    Potion (6)

    The algorithm drops each of these strings to lowercase, calculates their string hashes, multiplies that hash by the number selected for each string, then sums these keys together, eg:

        Final_Validation_Hash = 1*HASH(bow) + 2*HASH(sword) + 10*HASH(staff) + 1*HASH(shield) + 6*HASH(potion)

    By application of Euler's Method, I plan to demonstrate that these hashes are not unique, and want to devise a simple application to prove it. In my case, for 5 items, I would essentially be trying to calculate:

        (B)(y) = (A_1)(x_1) + (A_2)(x_2) + (A_3)(x_3) + (A_4)(x_4) + (A_5)(x_5)

    Where:

    B is arbitrary
    A_j are the selected coefficients/values for each string/category
    x_j are the hash values for each string/category
    y is the final validation hash (eg: 1111 above)
    B, y, A_j, x_j are all discrete-valued, positive, and non-zero (ie: natural numbers)

    Can someone either assist me in solving this problem or point me to a similar example (ie: code, worked out equations, etc)? I just need to solve the final step (ie: (B)(y) = ...). Thank you all in advance.
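
    A hedged brute-force sketch in C (the letter-position hash follows the b=2, o=15, w=23 example above; the search bounds are arbitrary): it looks for a second coefficient vector whose key-sum is a non-zero multiple of the target hash, which is enough to demonstrate a collision.

        #include <stdio.h>
        #include <ctype.h>

        /* Letter-position hash: a=1, b=2, ..., z=26, summed over the string. */
        static int hash(const char *s)
        {
            int h = 0;
            for (; *s; s++)
                h += tolower((unsigned char)*s) - 'a' + 1;
            return h;
        }

        int main(void)
        {
            const char *items[5] = { "bow", "sword", "staff", "shield", "potion" };
            int x[5], chosen[5] = { 1, 2, 10, 1, 6 };
            int target = 0;
            int a1, a2, a3, a4, a5;

            for (int j = 0; j < 5; j++) {
                x[j] = hash(items[j]);
                target += chosen[j] * x[j];
            }

            /* Search a small coefficient space for a different combination
               whose key-sum is a non-zero multiple of the target. */
            for (a1 = 1; a1 <= 20; a1++)
            for (a2 = 1; a2 <= 20; a2++)
            for (a3 = 1; a3 <= 20; a3++)
            for (a4 = 1; a4 <= 20; a4++)
            for (a5 = 1; a5 <= 20; a5++) {
                int sum = a1*x[0] + a2*x[1] + a3*x[2] + a4*x[3] + a5*x[4];
                if (sum % target == 0 &&
                    !(a1 == 1 && a2 == 2 && a3 == 10 && a4 == 1 && a5 == 6)) {
                    printf("collision: (%d,%d,%d,%d,%d) -> %d = %d * %d\n",
                           a1, a2, a3, a4, a5, sum, sum / target, target);
                    return 0;
                }
            }
            puts("none found in this range");
            return 0;
        }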


  • Best and simple way to handle JSON in Django

    - by primal
    Hi, as part of the application we are developing (with an Android client and a Django server), a JSON object containing a username and password is sent to the server from the Android client as follows:

        HttpPost post = new HttpPost(URL);
        /* Adding key value pairs */
        json.put("username", un);
        json.put("password", pwd);
        StringEntity se = new StringEntity(json.toString());
        post.setEntity(se);
        response = client.execute(post);

    The response is parsed like this:

        result = responsetoString(response.getEntity().getContent()); // Converts response to String
        jObject = new JSONObject(result);
        JSONObject post = jObject.getJSONObject("post");
        username = post.getString("username");
        message = post.getString("message");

    Hope up to this point everything is fine. The problem comes when parsing or sending JSON responses in the Django server. What's the best way to do this? We tried using SimpleJSON and it turned out not to be so simple, as we didn't find any good tutorials or sample code for it. Are there any Python functions similar to get, put and opt in Java for JSON? Any help would be much appreciated.
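
    A hedged Django-side sketch (the view and field names are made up for illustration; on Django of that era the raw request body is request.raw_post_data, renamed request.body in later versions). The json/simplejson module's loads and dumps play roughly the roles of Java's get/put:

        from django.http import HttpResponse
        import json  # stdlib module; simplejson exposes the same loads/dumps API

        def login_view(request):
            data = json.loads(request.raw_post_data)  # parse the posted JSON body
            username = data.get("username")           # dict.get() behaves like JSONObject.opt*
            password = data.get("password")

            reply = {"post": {"username": username, "message": "hello"}}
            return HttpResponse(json.dumps(reply), mimetype="application/json")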


  • Hibernate: deletes not cascading for self-referencing entities

    - by jwaddell
    I have the following (simplified) Hibernate entities:

        @Entity
        @Table(name = "package")
        public abstract class Package {
            protected Content content;

            @ManyToOne(cascade = {javax.persistence.CascadeType.ALL})
            @JoinColumn(name = "content_id")
            @Fetch(value = FetchMode.JOIN)
            public Content getContent() {
                return content;
            }

            public void setContent(Content content) {
                this.content = content;
            }
        }

        @Entity
        @Table(name = "content")
        public class Content {
            private Set<Content> subContents = new HashSet<Content>();

            @ManyToMany(fetch = FetchType.EAGER)
            @JoinTable(name = "subcontents",
                       joinColumns = {@JoinColumn(name = "content_id")},
                       inverseJoinColumns = {@JoinColumn(name = "elt")})
            @Cascade(value = {org.hibernate.annotations.CascadeType.DELETE,
                              org.hibernate.annotations.CascadeType.REPLICATE})
            @Fetch(value = FetchMode.SUBSELECT)
            public Set<Content> getSubContents() {
                return subContents;
            }

            public void setSubContents(Set<Content> subContents) {
                this.subContents = subContents;
            }
        }

    So a Package has a Content, and a Content is self-referencing in that it has many sub-Contents (which may contain sub-Contents of their own, etc). The relationships are required to be ManyToOne (Package to Content) and ManyToMany (Content to sub-Contents), but for the case I am currently testing each sub-Content only relates to one Package or Content.

    The problem is that when I delete a Package and flush the session, I get a Hibernate error stating that I'm violating a foreign key constraint on table subcontents, with a particular content_id still referenced from table subcontents. I've tried specifically (recursively) deleting the Contents before deleting the Package, but I get the same error. Is there a reason why this entity tree is not being deleted properly?


  • WPF: How can I KEEP the same ItemTemplate instance once its created ??

    - by Samir Sabri
    Hello, here is a scenario: I have a ListView, with

        ItemsSource = ProjectModel.Instance.PagesModelsCollection;

    where PagesModelsCollection is an ObservableCollection. In the ListView XAML part:

        <ListView.ItemTemplate>
            <DataTemplate x:Name="PagesViewDataTemplate">
                <DataTemplate.Resources>
                    <Style x:Key="PageHostStyle" TargetType="{x:Type p:KPage}">
                    </Style>
                </DataTemplate.Resources>
                <StackPanel x:Name="MarginStack" Margin="50,50,50,50">
                    <p:KPage x:Name="PageHost">
                    </p:KPage>
                </StackPanel>
            </DataTemplate>
        </ListView.ItemTemplate>

    The problem is that the ItemTemplate content is re-created each time we refresh the items. So, if we have 100 items in the ListView, another 100 new ItemTemplate instances will be created if we refresh the items! As a result, if we add UIElements to one of the ItemTemplate instances, those added UIElements will be lost, because the old ItemTemplate is replaced with a new one!

    How can I keep the ItemTemplate instance once it's created?


  • Bulk insert of component collection in Hibernate?

    - by edbras
    I have the mapping as listed below. When I update a detached Categories item (which doesn't contain any Hibernate class, as it comes from a DTO converter) I notice that Hibernate will first delete ALL employer wage instances (the collection link) and then insert ALL employer wage entries ONE-BY-ONE :(...

    I understand that it has to delete and then insert all entries, as it was completely detached. BUT, what I don't understand is why Hibernate is NOT inserting all the entries through a bulk insert, that is: inserting all the employer wage entries in one SQL statement. How can I tell Hibernate to use bulk inserts (if possible)? I tried playing with the following value but didn't see any difference:

        hibernate.jdbc.batch_size=30

    My mapping snippet:

        <class name="com.sample.CategoriesDefault" table="dec_cats">
            <id name="id" column="id" type="string" length="40" access="property">
                <generator class="assigned" />
            </id>
            <component name="incomeInfoMember" class="com.sample.IncomeInfoDefault">
                <property name="hasWage" type="boolean" column="inMemWage"/>
                ...
                <component name="wage" class="com.sample.impl.WageDefault">
                    <property name="hasEmployerWage" type="boolean" column="inMemEmpWage"/>
                    ...
                    <set name="employerWages" cascade="all-delete-orphan" lazy="false">
                        <key column="idCats" not-null="true" />
                        <one-to-many entity-name="mIWaEmp"/>
                    </set>
                </component>
            </component>
        </class>
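
    A hedged configuration sketch (standard Hibernate settings, not verified against this mapping): JDBC batching groups the INSERTs into driver-level batches rather than one literal multi-row SQL statement, and ordering inserts helps Hibernate actually fill those batches.

        hibernate.jdbc.batch_size=30
        hibernate.order_inserts=true
        hibernate.order_updates=true
        # Note: batching is silently disabled for entities using the identity
        # generator, so check which generator the "mIWaEmp" entity uses.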


  • NHibernate, legacy database, foreign keys that aren't

    - by Joe
    The project I'm working on has a legacy database with lots of information in it that's used to alter application behavior. Basically I'm stuck with something that I have to be super careful about changing.

    On to my problem. In this database is a table, and in this table is a column. This column contains integers, and most of the pre-existing data has a value of zero for this column. The problem is that this column is in fact a foreign key reference to another entity; it was just never defined as such in the database schema.

    Now in my new code I defined my Fluent NHibernate mapping to treat this column as a Reference so that I don't have to deal with entity ids directly in my code. This works fine until I come across an entity that has a value of 0 in this column. NHibernate thinks that a value of 0 is a valid reference. When my code tries to use that referenced object I get an ObjectNotFoundException, as obviously there is no object in my database with an id of 0.

    How can I, either through mapping or some kind of convention (I'm using Fluent NHibernate), get NHibernate to treat ids that are 0 the same as if they were NULL?
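
    One hedged possibility (a sketch; the property and column names are placeholders): NHibernate's not-found="ignore" option, exposed in Fluent NHibernate as NotFound.Ignore(), makes a dangling reference resolve to null instead of throwing ObjectNotFoundException. It targets missing rows in general rather than 0 specifically, but an id of 0 is exactly such a missing row:

        // In the ClassMap for the entity holding the legacy column:
        References(x => x.OtherEntity)   // placeholder property name
            .Column("legacy_fk_column")  // placeholder column name
            .NotFound.Ignore();          // dangling ids (like 0) load as null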


  • Why does Hibernate 2nd level cache only cache within a session?

    - by Synesso
    Using a named query in our application, and with ehcache as the provider, it seems that the query results are tied to the session. Any attempt to access the value from the cache a second time results in a LazyInitializationException.

    We have set lazy="true" for the following mapping because this object is also used by another part of the system which does not require the reference... and we want to keep it lean.

        <class name="domain.ReferenceAdPoint" table="ad_point" mutable="false" lazy="false">
            <cache usage="read-only"/>
            <id name="code" type="long" column="ad_point_id">
                <generator class="assigned" />
            </id>
            <property name="name" column="ad_point_description" type="string"/>
            <set name="synonyms" table="ad_point_synonym" cascade="all-delete-orphan" lazy="true">
                <cache usage="read-only"/>
                <key column="ad_point_id" />
                <element type="string" column="synonym_description" />
            </set>
        </class>

        <query name="find.adpoints.by.heading">from ReferenceAdPoint adpoint left outer join fetch adpoint.synonyms where adpoint.adPointField.headingCode = ?</query>

    Here's a snippet from our hibernate.cfg.xml:

        <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property>
        <property name="hibernate.cache.use_query_cache">true</property>

    It doesn't seem to make sense that the cache would be constrained to the session. Why are the cached queries not usable outside of the (relatively short-lived) sessions?


  • Slow Python HTTP server on localhost

    - by Abiel
    I am experiencing some performance problems when creating a very simple Python HTTP server. The key issue is that performance varies depending on which client I use to access it, where the server and all clients are being run on the local machine.

    For instance, a GET request issued from a Python script (urllib2.urlopen('http://localhost/').read()) takes just over a second to complete, which seems slow considering that the server is under no load. Running the GET request from Excel using MSXML2.ServerXMLHTTP also feels slow. However, requesting the data from Google Chrome or from RCurl, the curl add-in for R, yields an essentially instantaneous response, which is what I would expect.

    Adding further to my confusion is that I do not experience any performance problems for any client when I am on my computer at work (the performance problems are on my home computer). Both systems run Python 2.6, although the work computer runs Windows XP instead of 7.

    Below is my very simple server example, which simply returns 'Hello world' for any GET request:

        from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

        class MyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                print("Just received a GET request")
                self.send_response(200)
                self.send_header("Content-type", "text/html")
                self.end_headers()
                self.wfile.write('Hello world')
                return

            def log_request(self, code=None, size=None):
                print('Request')

            def log_message(self, format, *args):
                print('Message')

        if __name__ == "__main__":
            try:
                server = HTTPServer(('localhost', 80), MyHandler)
                print('Started http server')
                server.serve_forever()
            except KeyboardInterrupt:
                print('^C received, shutting down server')
                server.socket.close()

    Note that in MyHandler I override the log_request() and log_message() functions. The reason is that I read that a fully-qualified domain name lookup performed by one of these functions might be a reason for a slow server. Unfortunately setting them to just print a static message did not solve my problem.

    Also, notice that I have put in a print() statement as the first line of the do_GET() routine in MyHandler. The slowness occurs prior to this message being printed, meaning that none of the stuff that comes after it is causing a delay.
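
    A hedged thing to try (a common cause on machines of that era, though not verified for this case): 'localhost' can resolve to the IPv6 address ::1 first, and some clients burn a connection timeout there before falling back to IPv4, while browsers and curl recover quickly. Binding and requesting by literal IPv4 address sidesteps the lookup entirely:

        # bind explicitly to the IPv4 loopback ...
        server = HTTPServer(('127.0.0.1', 80), MyHandler)

        # ... and request it the same way
        import urllib2
        print(urllib2.urlopen('http://127.0.0.1/').read())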


  • Trying to convert existing production database table columns from enum to VARCHAR (Rails)

    - by dchua
    Hi everyone, I have a problem that needs me to convert my existing live production table column types from enums to strings (I've duplicated the schema on my local development box, don't worry :)).

    Background: basically, a previous developer left my codebase in absolute shit; migration versions are extremely out of date, and apparently he never used them after a certain point in development. Now that I'm tasked with migrating a Rails 1.2.6 app to 2.3.5, I can't get the tests to run properly on 2.3.5 because my table columns have ENUM column types, and they convert to :string, :limit => 0 in my schema.rb, which creates the problem of an invalid default value when doing a rake db:test:prepare, as in the case of:

        Mysql::Error: Invalid default value for 'own_vehicle': CREATE TABLE `lifestyles` (
            `id` int(11) DEFAULT NULL auto_increment PRIMARY KEY,
            `member_id` int(11) DEFAULT 0 NOT NULL,
            `own_vehicle` varchar(0) DEFAULT 'Y' NOT NULL,
            `hobbies` text,
            `sports` text,
            `AStar_activities` text,
            `how_know_IRC` varchar(100),
            `IRC_referral` varchar(200),
            `IRC_others` varchar(100),
            `IRC_rdrive` varchar(30)
        ) ENGINE=InnoDB

    I'm thinking of writing a migration task that looks through all the database tables for columns with ENUM types and replaces them with VARCHAR, and I'm wondering if this is the right way to approach this problem. I'm also not very sure how to write it such that it would loop through my database tables and replace all ENUM column types with VARCHAR.

    References:
    [1] https://rails.lighthouseapp.com/projects/8994/tickets/997-dbschemadump-saves-enum-columns-as-varchar0-on-mysql
    [2] http://dev.rubyonrails.org/ticket/2832
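
    A hedged sketch of such a migration (assumes MySQL and an adapter that reports the raw column type via sql_type; test against the duplicated schema first):

        class ConvertEnumsToVarchar < ActiveRecord::Migration
          def self.up
            conn = ActiveRecord::Base.connection
            conn.tables.each do |table|
              conn.columns(table).each do |col|
                # MySQL reports enum columns with sql_type like "enum('Y','N')"
                next unless col.sql_type =~ /\Aenum/i
                default = col.default ? " DEFAULT '#{col.default}'" : ""
                null    = col.null ? "" : " NOT NULL"
                execute "ALTER TABLE `#{table}` MODIFY `#{col.name}` VARCHAR(255)#{default}#{null}"
              end
            end
          end

          def self.down
            # Not reversible: the original enum value lists are lost.
          end
        end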


  • copy rows before updating them to preserve archive in Postgres

    - by punkish
    I am experimenting with creating a table that keeps a version of every row. The idea is to be able to query for how the rows were at any point in time, even if the query has JOINs. Consider a system where the primary resource is books; that is, books are queried for, and author info comes along for the ride.

        CREATE TABLE authors (
            author_id INTEGER NOT NULL,
            version INTEGER NOT NULL CHECK (version > 0),
            author_name TEXT,
            is_active BOOLEAN DEFAULT '1',
            modified_on TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
            PRIMARY KEY (author_id, version)
        );

        INSERT INTO authors (author_id, version, author_name)
        VALUES (1, 1, 'John'), (2, 1, 'Jack'), (3, 1, 'Ernest');

    I would like to be able to update the above like so (author_id 2 is Jack):

        UPDATE authors SET author_name = 'Jack K' WHERE author_id = 2;

    and end up with:

        2, 1, Jack,   t, 2012-03-29 21:35:00
        2, 2, Jack K, t, 2012-03-29 21:37:40

    which I can then query with:

        SELECT author_name, modified_on
        FROM authors
        WHERE author_id = 2 AND modified_on < '2012-03-29 21:37:00'
        ORDER BY version DESC LIMIT 1;

    to get:

        2, 1, Jack, t, 2012-03-29 21:35:00

    Something like the following doesn't really work:

        CREATE OR REPLACE FUNCTION archive_authors() RETURNS TRIGGER AS $archive_author$
        BEGIN
            IF (TG_OP = 'UPDATE') THEN
                -- The following fails because the (author_id, version) PK already exists
                INSERT INTO authors (author_id, version, author_name)
                VALUES (OLD.author_id, OLD.version, OLD.author_name);
                UPDATE authors SET version = OLD.version + 1
                WHERE author_id = OLD.author_id AND version = OLD.version;
                RETURN NEW;
            END IF;
            RETURN NULL; -- result is ignored since this is an AFTER trigger
        END;
        $archive_author$ LANGUAGE plpgsql;

        CREATE TRIGGER archive_author
        AFTER UPDATE OR DELETE ON authors
        FOR EACH ROW EXECUTE PROCEDURE archive_authors();

    How can I achieve the above? Or is there a better way to accomplish this? Ideally, I would prefer not to create a shadow table to store the archived rows.
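
    A hedged alternative (a sketch, untested): turn the logic around with a BEFORE UPDATE trigger that leaves the old row untouched and inserts the new state as a fresh version. Returning NULL from a BEFORE trigger suppresses the original in-place update, so no primary key collision occurs, and modified_on picks up its default on the insert.

        CREATE OR REPLACE FUNCTION version_authors() RETURNS TRIGGER AS $version_authors$
        BEGIN
            -- Keep the OLD row as history; write the change as a brand-new version.
            INSERT INTO authors (author_id, version, author_name, is_active)
            VALUES (NEW.author_id, OLD.version + 1, NEW.author_name, NEW.is_active);
            RETURN NULL; -- cancel the in-place UPDATE itself
        END;
        $version_authors$ LANGUAGE plpgsql;

        CREATE TRIGGER version_author
        BEFORE UPDATE ON authors
        FOR EACH ROW EXECUTE PROCEDURE version_authors();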


  • Google Web Toolkit or Microsoft Technology (Silverlight, ASP.NET)

    - by NativeByte
    We have a large code base in MFC and VB. A few applications are in .NET. All these applications interoperate with each other on the user's machine and also connect with Unix servers via sockets.

    Recently we have started discussing a rewrite of our applications and the possibility of moving a lot of these desktop applications to the web (they would run in an intranet). A straightforward way is rewriting them in one of the .NET technologies. But a suggestion about using Google Web Toolkit has popped up, and the argument is that it would help create applications that would run in a browser on both desktop and mobile devices.

    One of the key problems that I see is that GWT is a large abstraction over JavaScript. This will require the team to learn GWT, JavaScript, IDEs, etc., as their experience has been primarily Microsoft technologies and not Java. It would be easier for them to learn .NET technologies instead of GWT. I do not have a deep knowledge of GWT and its drawbacks and pitfalls, and do not know about a parallel Microsoft technology that I should investigate.

    So I would appreciate it if people here can share their views or experiences using GWT or an equivalent Microsoft technology.


  • std::cin >> *aa results in a bus error

    - by Koning Baard XIV
    I have a class called PPString:

    PPString.h:

        #ifndef __CPP_PPString
        #define __CPP_PPString

        #include "PPObject.h"

        class PPString : public PPObject {
            char *stringValue[];

        public:
            char *pointerToCharString();
            void setCharString(char *charString[]);
            void setCharString(const char charString[]);
        };

        #endif

    PPString.cpp:

        #include "PPString.h"

        char *PPString::pointerToCharString() {
            return *stringValue;
        }

        void PPString::setCharString(char *charString[]) {
            *stringValue = *charString;
        }

        void PPString::setCharString(const char charString[]) {
            *stringValue = (char *)charString;
        }

    I'm trying to set the stringValue using std::cin:

    main.cpp:

        PPString myString;
        myString.setCharString("LOLZ");
        std::cout << myString.pointerToCharString() << std::endl;

        char *aa[1000];
        std::cin >> *aa;
        myString.setCharString(aa);
        std::cout << myString.pointerToCharString() << std::endl;

    The first one, which uses a const char, works, but the second one, with a char, doesn't, and I get this output (copy and paste from STDOUT):

        LOLZ
        im entering a string now...
        Bus error

    where the second line is what I entered, followed by pressing the return key. Can anyone help me fix this? Thanks...
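
    A hedged note on the crash, with a sketch: char *aa[1000] declares 1000 uninitialized char pointers, so std::cin >> *aa writes the input through whatever garbage address happens to sit in aa[0], hence the bus error. Reading into an actual character buffer (or better, a std::string) avoids it:

        char aa[1000];              // a real buffer of 1000 chars, not 1000 wild pointers
        std::cin >> aa;             // reads one whitespace-delimited token into the buffer
        myString.setCharString(aa); // binds to the const char[] overload

        // or, sidestepping raw buffers entirely:
        // std::string line;
        // std::getline(std::cin, line);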


  • What are the options for overriding Django's cascading delete behaviour?

    - by Tom
    Django models generally handle the ON DELETE CASCADE behaviour quite adequately (in a way that works on databases that don't support it natively). However, I'm struggling to discover the best way to override this behaviour where it is not appropriate, for example in the following scenarios:

    1. ON DELETE RESTRICT (i.e. prevent deleting an object if it has child records)
    2. ON DELETE SET NULL (i.e. don't delete a child record, but set its parent key to NULL instead to break the relationship)
    3. Update other related data when a record is deleted (e.g. deleting an uploaded image file)

    The following are the potential ways to achieve these that I am aware of:

    1. Override the model's delete() method. While this sort of works, it is sidestepped when the records are deleted via a QuerySet. Also, every model's delete() must be overridden to make sure Django's code is never called, and super() can't be called as it may use a QuerySet to delete child objects.
    2. Use signals. This seems ideal, as they are called when deleting the model directly or via a QuerySet. However, there is no possibility to prevent a child object from being deleted, so it is not usable to implement ON DELETE RESTRICT or SET NULL.
    3. Use a database engine that handles this properly (what does Django do in this case?)
    4. Wait until Django supports it (and live with bugs until then...)

    It seems like the first option is the only viable one, but it's ugly, throws the baby out with the bath water, and risks missing something when a new model/relation is added. Am I missing something? Any recommendations?
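
    For reference: in later Django versions (1.3+) the first two scenarios became declarative via the ForeignKey on_delete argument, so no override or signal is needed. A hedged sketch (model names are placeholders):

        from django.db import models

        class Parent(models.Model):
            name = models.CharField(max_length=100)

        class Child(models.Model):
            # PROTECT behaves like ON DELETE RESTRICT (raises if children exist);
            # SET_NULL behaves like ON DELETE SET NULL (requires null=True).
            parent = models.ForeignKey(Parent, null=True,
                                       on_delete=models.SET_NULL)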

