Search Results

Search found 8536 results on 342 pages for 'indexed fields'.

Page 87/342 | < Previous Page | 83 84 85 86 87 88 89 90 91 92 93 94  | Next Page >

  • How can I write reusable Javascript?

    - by RenderIn
    I've started to wrap my functions inside of objects, e.g.:

        var Search = {
          carSearch: function(color) { },
          peopleSearch: function(name) { },
          ...
        }

    This helps a lot with readability, but I continue to have issues with reusability. To be more specific, the difficulty is in two areas:

    Receiving parameters. A lot of the time I will have a search screen with multiple input fields and a button that calls the JavaScript search function. I either have to put a bunch of code in the button's onclick to retrieve and marshal the values from the input fields into the function call, or I have to hardcode the HTML input field names/IDs so that I can subsequently retrieve them with JavaScript. The solution I've settled on is to pass the field names/IDs into the function, which then uses them to retrieve the values from the input fields. This is simple but really seems improper.

    Returning values. The effect of most JavaScript calls tends to be one in which some visual on the screen changes directly, or as a result of another action performed in the call. Reusability is toast when I put these screen-altering effects at the end of a function. For example, after a search is completed I need to display the results on the screen.

    How do others handle these issues? Putting my thinking cap on leads me to believe that I need a page-specific layer of JavaScript between each use in my application and the generic, application-wide methods I create. Using the previous example, I would have a search button whose onclick calls myPageSpecificSearchFunction, in which the search field IDs/names are hardcoded and which marshals the parameters and calls the generic search function. The generic function would return data/objects/variables only, and would not directly read from or make any changes to the DOM. The page-specific search function would then receive this data back and alter the DOM appropriately. Am I on the right path, or is there a better pattern for handling the reuse of JavaScript objects/methods?
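
    Since the question is about the split itself, here is a minimal sketch of the pattern described above (all names are illustrative, not from the original post; it assumes ES5 array methods): the generic function takes plain values and returns data, and only the page-specific wrapper knows field IDs and touches the DOM.

        // Generic, application-wide: no DOM access, data in, data out.
        var cars = [ { name: 'Roadster', color: 'red' }, { name: 'Wagon', color: 'blue' } ];
        var Search = {
          carSearch: function (color) {
            return cars.filter(function (car) { return car.color === color; });
          }
        };

        // Page-specific glue: field IDs are hardcoded here, and only here.
        function myPageSpecificSearch() {
          var color = document.getElementById('colorField').value;
          var results = Search.carSearch(color);
          document.getElementById('results').innerHTML = results
            .map(function (car) { return '<li>' + car.name + '</li>'; })
            .join('');
        }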


  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large, non-normalized table. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:

        SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName=@name OR LastName=@name OR FullName=@name)
          AND State=@state;

    FirstName, LastName, FullName and State are all indexed as B-trees without prefixes - the whole column is indexed - and the State column is a 2-letter state code. What I'm finding is this:

    - When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
    - When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
    - When I remove the FirstName and LastName comparisons and only use FullName and State, both cases above work very quickly.
    - When I keep the FirstName, LastName, FullName and State searches but use LIKE for each, it works fast for @name = 'John Smith%' and @state = '%', but slow for @name = 'John Smith%' and @state = 'FL'.
    - When I search against 'John Sm%' and @state = 'FL', the search finds results immediately.
    - When I search against 'John Smi%' and @state = 'FL', the search takes 5 minutes.

    Now, just to reiterate: the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to any form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design...

    What I'm wondering is: though there are many problems with this design, what is causing this specific problem? My guess is that the index trees (FirstName, LastName, FullName) are just so large that traversing them takes a very long time. Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.
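
    One hedged line of attack (a sketch, not a verified fix; it assumes MySQL and the column names above): OR conditions across different indexed columns often prevent the optimizer from combining any single index with the State filter. Composite indexes, or a UNION that lets each branch use its own index, are the usual workarounds.

        -- Composite indexes so each name column can be filtered together with State:
        CREATE INDEX idx_fullname_state  ON Activity (FullName, State);
        CREATE INDEX idx_firstname_state ON Activity (FirstName, State);
        CREATE INDEX idx_lastname_state  ON Activity (LastName, State);

        -- Or rewrite the OR as a UNION, one indexable probe per branch:
        SELECT FirstName, LastName, FullName, State FROM Activity WHERE FullName  = @name AND State = @state
        UNION
        SELECT FirstName, LastName, FullName, State FROM Activity WHERE FirstName = @name AND State = @state
        UNION
        SELECT FirstName, LastName, FullName, State FROM Activity WHERE LastName  = @name AND State = @state;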


  • fixed layout in formview template

    - by Jeroen
    I have n pages with FormViews, all sharing a similar layout inside their item/edit/insert templates. For example, all item and edit templates have a header and a body part where I put the fields; the header has a certain style, and the body part does too. My question is: how can I enforce this style in all my FormViews without repeating the same bulk CSS styles every time?

    Right now I'm using master pages for this, with multiple FormViews on one page. That's not good, I think. I want one page for edit/insert/item and one FormView. I would prefer to define the style for the edit template once and load it into every FormView. Of course, not all the FormViews have the same fields, so like master pages, I would like to have 'areas' where I can put my fields. The perfect way, I suppose, would be for a FormView to span a complete master-page-based page, including the content placeholders inside its edit/insert/item templates. Any ideas are more than welcome.

    Edit: I read that this is possible in .NET 4 using Dynamic Data. I'm stuck with 3.5 for the moment.
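
    One way to get master-page-like 'areas' without .NET 4 is a templated user control dropped into each FormView template: the shared chrome lives in the control, and each page supplies only its fields. A hedged sketch following the standard templated-user-control pattern (all names are illustrative):

        <%-- EntryLayout.ascx: the shared header/body chrome --%>
        <div class="entry-header"><asp:Label ID="HeaderLabel" runat="server" /></div>
        <div class="entry-body"><asp:PlaceHolder ID="BodyHolder" runat="server" /></div>

        // EntryLayout.ascx.cs: expose an ITemplate "area" for the fields
        [ParseChildren(true)]
        public partial class EntryLayout : System.Web.UI.UserControl
        {
            [PersistenceMode(PersistenceMode.InnerProperty)]
            [TemplateInstance(TemplateInstance.Single)]
            public ITemplate Body { get; set; }

            protected void Page_Init(object sender, EventArgs e)
            {
                if (Body != null) Body.InstantiateIn(BodyHolder);
            }
        }

        <%-- usage inside any FormView EditItemTemplate --%>
        <uc:EntryLayout runat="server">
            <Body>
                <asp:TextBox ID="NameBox" runat="server" Text='<%# Bind("Name") %>' />
            </Body>
        </uc:EntryLayout>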


  • Mapping issue with multi-field primary keys using hibernate/JPA annotations

    - by Derek Clarkson
    Hi all, I'm stuck with a database which uses multi-field primary keys. I have a master and a details table, where the details table's primary key contains fields which are also the foreign keys to the master table. Like this:

        Master primary key fields:
            master_pk_1
        Details primary key fields:
            master_pk_1
            details_pk_2
            details_pk_3

    In the Master class we define the Hibernate/JPA annotations like this:

        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "idGenerator")
        @Column(name = "master_pk_1")
        private long masterPk1;

        @OneToMany(cascade = CascadeType.ALL)
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1")
        private List<Details> details = new ArrayList<Details>();

    And in the Details class I have defined the id and the back reference like this:

        @EmbeddedId
        @AttributeOverrides({
            @AttributeOverride(name = "masterPk1",  column = @Column(name = "master_pk_1")),
            @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")),
            @AttributeOverride(name = "detailsPk3", column = @Column(name = "details_pk_3"))
        })
        private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey();

        @ManyToOne
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1", insertable = false)
        private Master master;

    The goal of all of this was that I could create a new master, add some details to it, and when saved, JPA/Hibernate would generate the new id for the master in the masterPk1 field and automatically pass it down to the details records, storing it in the matching masterPk1 field of the DetailsPrimaryKey class. At least that's what the documentation I've been looking at implies. What actually happens is that Hibernate appears to correctly create and update the records in the database, but does not pass the key to the details classes in memory; instead I have to set it manually myself. I also found that without insertable = false on the back reference to master, Hibernate would generate SQL with the master_pk_1 field listed twice in the insert statement, resulting in the database throwing an exception.

    My question is simply: is this arrangement of annotations correct, or is there a better way of doing it?
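
    For reference, a hedged sketch of what the @Embeddable composite key class referred to above usually looks like (inferred from the mappings in the question, not taken from the original post); composite key classes must be Serializable and implement equals()/hashCode() over all key fields:

        import java.io.Serializable;
        import javax.persistence.Embeddable;

        @Embeddable
        public class DetailsPrimaryKey implements Serializable {
            private long masterPk1;
            private long detailsPk2;
            private long detailsPk3;

            // getters/setters omitted; equals() and hashCode() over all three
            // fields are required for Hibernate to manage the composite key.
        }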


  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as:

    - sex (true/false)
    - demographic classification (A, B, C, etc.)
    - place of birth
    - date of birth
    - weight (recorded daily): the fact being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data, and generally view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like some help with the design of these tables.

    'Natural' (or obvious) dimensions, which have hierarchical attributes, are:

    - date dimension
    - geographical location

    However, I am struggling with how to model the following fields:

    - sex (true/false)
    - demographic classification (A, B, C, etc.)

    The reason I am struggling with these fields is that:

    - They have no obvious hierarchical attributes which would aid aggregation (AFAIK), which suggests they should be in the fact table.
    - They are mostly static, or very rarely change, which suggests they should be in a dimension table.

    Maybe the heuristic I am using is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse; hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification, e.g. answer questions like:

    - How do male and female weights compare across different demographic classifications?
    - Which demographic classification (male AND female) shows the greatest increase in weight this quarter?

    Can anyone clarify whether sex and demographic classification belong in the fact table, or whether (as I suspect) they are dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.
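
    For what it's worth, the usual star-schema treatment is to make both of them small dimension tables: low cardinality and a missing hierarchy are fine for a dimension, since grouping by them is exactly what the 'slice' queries need. A hedged sketch (names and types are illustrative; the date and location dimensions are assumed to be defined along the same lines):

        CREATE TABLE dim_sex (
            sex_id INT PRIMARY KEY,
            name   VARCHAR(10) NOT NULL          -- 'male' / 'female'
        );

        CREATE TABLE dim_demographic (
            demographic_id INT PRIMARY KEY,
            name           VARCHAR(4) NOT NULL   -- 'A', 'B', 'C', ...
        );

        CREATE TABLE fact_weight (
            date_id        INT NOT NULL,         -- -> dim_date (assumed)
            location_id    INT NOT NULL,         -- -> dim_location (assumed)
            sex_id         INT NOT NULL REFERENCES dim_sex (sex_id),
            demographic_id INT NOT NULL REFERENCES dim_demographic (demographic_id),
            weight         DECIMAL(6,2) NOT NULL
        );

        -- "How do male and female weights compare across demographic classifications?"
        SELECT s.name, d.name, AVG(f.weight)
        FROM fact_weight f
        JOIN dim_sex s         ON s.sex_id = f.sex_id
        JOIN dim_demographic d ON d.demographic_id = f.demographic_id
        GROUP BY s.name, d.name;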


  • Multiselect Form Field in PDF

    - by Jason R. Coombs
    Using PDF, is it possible to create a single form element with multiple fields of which several can be selected? For example, in HTML one can create a set of checkboxes associated with the same field name:

        <div>Select one for Member of the School Board</div>
        <input type="checkbox" name="field(school)" value="vote1">
        <span class="label">Libby T. Garvey</span><br/>
        <input type="checkbox" name="field(school)" value="vote2">
        <span class="label">Emma N. Violand-Sanchez</span><br/>

    In this case, the field name is "field(school)", and when the form is submitted, "field(school)" can be supplied 0, 1, or 2 times. Is there an equivalent construct in PDF where a single field can have multiple values? So far in my investigation, it appears that if fields are assigned the same name, it is only possible to select one of them. If it is possible to implement this in PDF, what is this construct called, and how can it be implemented?

    Edit: To clarify, I am aware that a PDF can contain multiple form fields with different field names which can be selected independently, but then the grouping is implicit rather than explicit as in the HTML form. I would like a construct that makes the grouping of options explicit, and preferably allows for restrictions (e.g. at least one required, no more than 2 allowed, etc.).
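
    The closest native construct appears to be a choice field (/FT /Ch) with the MultiSelect flag set, rather than same-named checkboxes: with MultiSelect, /V may hold an array of values. A hedged raw-PDF fragment (the usual widget-annotation entries such as /Subtype and /Rect are omitted, and constraints like "no more than 2" still need JavaScript validation, since AcroForms have no declarative cardinality rules):

        % MultiSelect is bit 22 of /Ff, i.e. 1 << 21 = 2097152
        5 0 obj
        << /FT /Ch
           /T (field.school)
           /Ff 2097152
           /Opt [ (Libby T. Garvey) (Emma N. Violand-Sanchez) ]
           /V [ (Libby T. Garvey) (Emma N. Violand-Sanchez) ]   % multiple values
        >>
        endobj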


  • Getter and Setter - POJO object - Problem with input data in Struts2

    - by andreimladin
    I have a problem with setter and getter methods in Struts2. I have a form (with all the input fields of a Job; addJob is mapped to this action) and an action:

        public class InsertJobAction extends ActionSupport {
            ...
            private Job job = null;

            public String execute() {
                jobService.insert(job); // here job is not null; that is OK
            }

            // getter and setter for job
        }

    This action works correctly. I have a similar form and action, but the second form has fewer input fields than the first. The problem is that in execute() of the second action, job is null. Why? Does it depend on the number of fields? I have two constructors in my Job class: one with no parameters, and one with parameters for every field of the class. I did some debugging with Log4j, and in the first case execution arrives in the Job constructor; in the second case it does not. Why? When is the constructor called? When are the setter and getter methods called, before or after the execute() method? And when I have a form with input data, are the setter methods called before the execute() method? I'm very confused, because in one case it works without problems, but in the second case it doesn't. Thanks, Andrew
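
    For what it's worth, a hedged sketch of the behavior being asked about (field names are illustrative): with the default params interceptor, setters run before execute(), and Struts2 only instantiates job (through its no-arg constructor) when at least one submitted parameter is named job.something. If the second form has no job.* fields, job stays null.

        <%-- Each "job.xxx" parameter triggers: new Job(); job.setXxx(value);
             all of this happens before execute() is invoked. --%>
        <s:form action="addJob">
            <s:textfield name="job.title"  label="Title" />
            <s:textfield name="job.salary" label="Salary" />
            <s:submit />
        </s:form>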


  • PayPal address1 HTML name doesn't work?

    - by ajsie
    I use this code to send the customer to PayPal's payment page:

        <form action="https://www.paypal.com/cgi-bin/webscr" method="post">
            <input type='hidden' name='cmd' value='_cart' />
            <input type='hidden' name='upload' value='1' />
            <input type="hidden" name="business" value="[email protected]">
            <input type="hidden" name="currency_code" value="SEK">
            <input type="hidden" name="return" value="http://freelanceswitch.com/payment-complete/">
            <input type="hidden" name="item_number_1" value="01 - General Payment">
            <input type="hidden" name="item_name_1" size="45">
            <input type="hidden" name="amount_1" size="45">
            <input type="hidden" name="item_number_2" value="01 - Bonus Payment">
            <input type="hidden" name="item_name_2" size="45">
            <input type="hidden" name="amount_2" size="45">

            <!-- PREPOPULATING FIELDS -->
            <input type='hidden' name='address1' value='Open Bridge street 19' />
            <input type='hidden' name='address2' value='Easter heaven garden 12' />
            <input type='hidden' name='first_name' value='Peter' />
            <input type='hidden' name='last_name' value='Hansen' />

            <input type="submit" name="Submit" value="Submit">
        </form>

    Everything works except address1 and address2 in the PREPOPULATING FIELDS section: the fields for Billing Address Line 1 and Billing Address Line 2 are empty. Does anyone know why?
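
    One hedged guess (variable names from PayPal's Website Payments Standard documentation; the added values here are illustrative): PayPal tends to ignore a prepopulated address unless it is reasonably complete, and address_override is needed for it to take precedence over the address stored on the buyer's account. Something along these lines may behave better:

        <input type="hidden" name="address_override" value="1">
        <input type="hidden" name="address1" value="Open Bridge street 19">
        <input type="hidden" name="address2" value="Easter heaven garden 12">
        <input type="hidden" name="city"     value="Stockholm">
        <input type="hidden" name="zip"      value="11120">
        <input type="hidden" name="country"  value="SE">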


  • acts_as_solr isn't updating associated models in Rails

    - by Trey Bean
    I'm using acts_as_solr for searching in a project. Unfortunately, the index doesn't seem to be updated for associated models when a model is saved. For example, I have three models:

        class Merchant < ActiveRecord::Base
          acts_as_solr :fields => [:name, :domain, :description], :include => [:coupons, :tags]
          ...
        end

        class Coupon < ActiveRecord::Base
          acts_as_solr :fields => [:store_name, :url, :code, :description]
          ...
        end

        class Tag < ActiveRecord::Base
          acts_as_solr :fields => [:name]
          ...
        end

    I use the following line to perform a search:

        Merchant.paginate_by_solr(params[:q], :per_page => PER_PAGE, :page => [(params[:page] || 1).to_i, 1].max)

    For some reason, after I add a coupon that contains the word 'shoes' in its description, a query for 'shoes' doesn't return the merchant associated with the coupon. The associations all work, and if I run rake solr:reindex, the search then returns the new coupon. Do I need to update the index for Merchant each time a new coupon is created? Do I have to update the index for the whole class, or can I just update the associated merchant? Shouldn't this be done automatically? Thanks for any input.
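
    acts_as_solr only reindexes the record that was actually saved, so a common workaround is a callback on the child that pushes the parent back into the index. A hedged sketch (solr_save is the method acts_as_solr mixes into models; verify it against your plugin version, and the belongs_to is assumed from the :include above):

        class Coupon < ActiveRecord::Base
          acts_as_solr :fields => [:store_name, :url, :code, :description]
          belongs_to :merchant

          after_save    :reindex_merchant
          after_destroy :reindex_merchant

          private

          def reindex_merchant
            merchant.solr_save if merchant
          end
        end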


  • LinkedIn API returning extra/incorrect login prompt

    - by Paul Osetinsky
    I have a Rails application running the omniauth-linkedin gem and the linkedin gem (essentially an API wrapper). When a user logs in, they receive a primary login prompt that displays the correct scopes (FULL PROFILE and EMAIL ADDRESS). However, after they log in, they get another login prompt that should not come up, and that ignores the initial scope request: it tells them that LinkedIn is only requesting their PROFILE OVERVIEW, which is incorrect.

    The problem must lie in my auth_controller, and I think it has to do with the URL that is created in one of the authentication stages (right after the user enters their LinkedIn credentials). Here is my auth_controller:

        require 'linkedin'

        class AuthController < ApplicationController
          def auth
            client = LinkedIn::Client.new(ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'])
            request_token = client.request_token(:oauth_callback => "http://#{request.host_with_port}/callback")
            session[:rtoken]  = request_token.token
            session[:rsecret] = request_token.secret
            redirect_to client.request_token.authorize_url
          end

          def callback
            client = LinkedIn::Client.new(ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'])
            if session[:atoken].nil?
              pin = params[:oauth_verifier]
              atoken, asecret = client.authorize_from_request(session[:rtoken], session[:rsecret], pin)
              session[:atoken]  = atoken
              session[:asecret] = asecret
              @user = current_user
              @user.uid = client.profile(:fields => ["id"]).id
              flash.now[:success] = 'Signed in with LinkedIn.'
            else
              client.authorize_from_access(session[:atoken], session[:asecret])
              @user.uid = client.profile(:fields => ["id"]).id
              flash.now[:success] = 'Signed in with LinkedIn.'
            end
            @user = current_user
            @user.save
            redirect_to current_user
          end
        end

    Just in case, here is my omniauth.rb file stating the scopes I am requesting for my application:

        Rails.application.config.middleware.use OmniAuth::Builder do
          provider :linkedin, ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'],
            :scope => 'r_fullprofile r_emailaddress',
            :fields => ['id', 'email-address', 'first-name', 'last-name', 'headline',
                        'industry', 'picture-url', 'public-profile-url', 'location',
                        'positions', 'educations']
        end

    I can't figure out how to get rid of that second, unnecessary and misleading prompt from LinkedIn and would appreciate any guidance! Thank you.
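
    One hedged observation grounded in the code above: auth calls client.request_token twice, so the authorize_url the user is redirected to belongs to a second request token, obtained without the callback (and, depending on gem version, without the scope) of the first one; that second token is a plausible source of the second, narrower prompt. A sketch of the fix (the :scope option name is an assumption to verify against your linkedin gem version):

        def auth
          client = LinkedIn::Client.new(ENV['LINKEDIN_KEY'], ENV['LINKEDIN_SECRET'])
          request_token = client.request_token(
            :oauth_callback => "http://#{request.host_with_port}/callback",
            :scope          => 'r_fullprofile r_emailaddress'
          )
          session[:rtoken]  = request_token.token
          session[:rsecret] = request_token.secret
          redirect_to request_token.authorize_url   # reuse the token from above
        end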


  • SQL Server Bulk insert of CSV file with inconsistent quotes

    - by mattstuehler
    Is it possible to BULK INSERT (SQL Server) a CSV file in which the fields are only occasionally surrounded by quotes? Specifically, quotes only surround those fields that contain a ",". In other words, I have data that looks like this (the first row contains headers):

        id, company, rep, employees
        729216,INGRAM MICRO INC.,"Stuart, Becky",523
        729235,"GREAT PLAINS ENERGY, INC.","Nelson, Beena",114
        721177,GEORGE WESTON BAKERIES INC,"Hogan, Meg",253

    Because the quotes aren't consistent, I can't use '","' as a delimiter, and I don't know how to create a format file that accounts for this. I tried using ',' as a delimiter and loading the data into a temporary table where every column is a varchar, then using some kludgy processing to strip out the quotes, but that doesn't work either, because the fields that contain ',' are split into multiple columns. Unfortunately, I don't have the ability to manipulate the CSV file beforehand. Is this hopeless? Many thanks in advance for any advice. By the way, I saw the post "SQL bulk import from csv", but in that case EVERY field was consistently wrapped in quotes, so the asker could use ',' as a delimiter and strip out the quotes afterwards.
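
    For later readers: SQL Server 2017 added native RFC-4180-style CSV support to BULK INSERT, which handles exactly this "quoted only when needed" layout; on older versions the usual fallback is importing each whole line into a single-column staging table and parsing it in T-SQL. A hedged sketch (path and table name are illustrative):

        BULK INSERT dbo.Companies
        FROM 'C:\data\companies.csv'
        WITH (
            FORMAT = 'CSV',          -- SQL Server 2017 and later
            FIELDQUOTE = '"',
            FIRSTROW = 2,            -- skip the header row
            FIELDTERMINATOR = ',',
            ROWTERMINATOR = '\n'
        );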


  • Combo-box values automatically update

    - by glinch
    Hi all, hopefully somebody can help. The table structure is as follows:

        tblCompany:  compID, compName
        tblOffice:   offID, compID, add1, add2, add3, ...
        tblEmployee: empID, Name, telNo, ..., offID

    I have a form that contains contact details for employees; everything works using AfterUpdate. A cascading combo box, cmbComp, allows me to select a company and in turn select the appropriate office, cboOff, and it updates the corresponding tblEmployee.offID field correctly. The address fields are updated automatically as well.

    cmbComp RowSource:

        SELECT DISTINCT tblOffice.compID, tblCompany.compID
        FROM tblCompany
        INNER JOIN AdjusterCompanyOffice ON tblCompany.compID = tblOffice.compID
        ORDER BY tblCompany.compName;

    cboOff RowSource:

        SELECT tblCompany.offID, tblCompany.Address1, tblCompany.Address2,
               tblCompany.Address3, tblCompany.Address4, tblCompany.Address5
        FROM tblCompany
        ORDER BY tblCompany.Address1;

    The problem I am having is how, when I load an existing record, to retrieve the data and automatically load the cmbComp combo and the text fields. The cboOff combo box loads correctly, as its control source is offID. I imagine there must be a way of setting the value on opening the record? Not sure how, though. I don't think I can set the control source of cmbComp or the text fields, or can I? Any help or pointers in the right direction appreciated; I have been searching for a way to do this but can't get anywhere!
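
    One hedged way to do it (Access VBA; table and control names follow the question where possible, the rest is illustrative): in the form's On Current event, walk back from the stored offID to its company, set cmbComp, then requery cboOff so its row source reflects that company.

        Private Sub Form_Current()
            If Not IsNull(Me.cboOff) Then
                ' look up the company that owns the saved office
                Me.cmbComp = DLookup("compID", "tblOffice", "offID=" & Me.cboOff)
                Me.cboOff.Requery
            End If
        End Sub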


  • JSON.Net and Linq

    - by user1745143
    I'm a bit of a newbie when it comes to LINQ, and I'm working on a site that parses a JSON feed using Json.NET. The problem is that I need to pull multiple fields from the feed and use them in a foreach block. The documentation for Json.NET only shows how to pull a single field. I've tried a few variations after checking the LINQ documentation, but nothing has worked so far. Here's what I've got:

        WebResponse objResponse;
        WebRequest objRequest = HttpWebRequest.Create(url);
        objResponse = objRequest.GetResponse();
        using (StreamReader reader = new StreamReader(objResponse.GetResponseStream()))
        {
            string json = reader.ReadToEnd();
            JObject rss = JObject.Parse(json);

            var postTitles = from p in rss["feedArray"].Children()
                             select (string)p["item"];
                             // These are the fields I also need to query:
                             // (string)p["title"], (string)p["message"]
                             // I've also tried this with Console.Write and
                             // labeling the field indices for each pulled field.

            foreach (var item in postTitles)
            {
                lbl_slides.Text += "<div class='slide'><div class='slide_inner'><div class='slide_box'><div class='slide_content'></div><!-- slide content --></div><!-- slide box --></div><div class='rotator_photo'><img src='" + item + "' alt='' /></div><!-- rotator photo --></div><!-- slide -->";
            }
        }

    Has anyone seen how to pull multiple fields from a JSON feed and use them as part of a foreach block (or something similar)?
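
    A hedged sketch of the usual approach: project each child into an anonymous type so the foreach can use all the fields at once (the property names inside p[...] follow the question; the HTML is abbreviated and illustrative):

        var posts = from p in rss["feedArray"].Children()
                    select new
                    {
                        Item    = (string)p["item"],
                        Title   = (string)p["title"],
                        Message = (string)p["message"]
                    };

        foreach (var post in posts)
        {
            lbl_slides.Text += "<div class='slide'>" + post.Message +
                               "<img src='" + post.Item + "' alt='" + post.Title + "' /></div>";
        }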


  • Programmatically Add Controls to WPF Form

    - by user210757
    I am trying to add controls to a UserControl dynamically (programmatically). I get a generic List of objects from my business layer (retrieved from the database), and for each object I want to add a Label and a TextBox to the WPF UserControl, set their positions and widths to make them look nice, and hopefully take advantage of WPF's validation capabilities. This would be easy in Windows Forms programming, but I'm new to WPF. How do I do this (see comments for questions)? Say this is my object:

        public class Field
        {
            public string Name { get; set; }
            public int Length { get; set; }
            public bool Required { get; set; }
        }

    Then in my WPF UserControl, I'm trying to create a Label and TextBox for each object:

        public void createControls()
        {
            List<Field> fields = businessObj.getFields();
            Label label = null;
            TextBox textbox = null;
            foreach (Field field in fields)
            {
                label = new Label();
                // HOW do I set the text, x and y (margin), width, and validation
                // based upon the object? I have tried this without luck:
                // Binding b = new Binding("Name");
                // BindingOperations.SetBinding(label, Label.ContentProperty, b);
                MyGrid.Children.Add(label);

                textbox = new TextBox();
                // ???
                MyGrid.Children.Add(textbox);
            }

            // databind?
            this.DataContext = fields;
        }
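
    A hedged sketch of the loop body (it assumes MyGrid is a two-column Grid; the width heuristic is arbitrary): add a row per Field, set Content on the Label directly, and attach Grid row/column coordinates instead of juggling margins.

        int row = 0;
        foreach (Field field in fields)
        {
            MyGrid.RowDefinitions.Add(new RowDefinition { Height = GridLength.Auto });

            var label = new Label { Content = field.Name };
            Grid.SetRow(label, row);
            Grid.SetColumn(label, 0);
            MyGrid.Children.Add(label);

            var textbox = new TextBox { MaxLength = field.Length, MinWidth = 8 * field.Length };
            Grid.SetRow(textbox, row);
            Grid.SetColumn(textbox, 1);
            MyGrid.Children.Add(textbox);

            row++;
        }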


  • Explaining verity index and document search limits

    - by Ahmad
    At present, we have a CF8 Standard Edition server, which has some limitations around Verity indexing. According to Adobe, Verity Server has the following document search limits (the limits are for all collections registered to Verity Server):

    - 10,000 documents for ColdFusion Developer Edition
    - 125,000 documents for ColdFusion Standard Edition
    - 250,000 documents for ColdFusion Enterprise Edition

    We have now reached a stage where the server-wide number of indexed documents exceeds 125k. However, the largest Verity collection consists of about 25k documents (and this is expected to grow), and only one collection is ever searched at a time. In my understanding, this means I can still search an entire collection with no restrictions. Is this correct? Or does it mean that only documents indexed across all collections prior to reaching the limit are actually searchable?

    We are considering moving to CF9 Standard as a solution, and using the Solr integration, which has no such restrictions. The coldfusionjedi blog highlights some differences between Verity and Solr. However, before we upgrade I am trying to gain a clearer understanding of this. Can someone provide a clear explanation of what this limit means and how it actually affects Verity searching and indexing?


  • Why would some $_POST variables go missing for a form with PHP?

    - by Chad Johnson
    Sometimes, multiple times a day in fact, users of my web application submit a certain form which has about a dozen form fields, half of which are hidden fields, and half of the $_POST data is simply not present in the processing script. Note that the missing fields are at the very bottom of the form. I know this because it raises a fatal error, and an email is dispatched to me which includes the POST data. And of course, neither I nor any of the developers on my team can reproduce the problem. Flash is involved in the process, as I'm using a library called Uploadify to display a progress meter. Here is the flow. Does anyone have ANY ideas at all why some of the POST data would be getting wiped out?

    1. User visits the edit screen for a page in the CMS I am using. The record id for the page is put into the form as a hidden value.
    2. User clicks the Uploadify browse button and selects a file (only single-file selection is allowed).
    3. User clicks the Submit button for my form.
    4. jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
    5. Uploadify uploads to a custom process script.
    6. Uploadify finishes uploading and triggers the JavaScript completion callback.
    7. The JavaScript callback calls $('#myForm').submit() to submit the form.

    This happens on multiple browsers (Firefox 3.5, 3.6, Safari, Internet Explorer 7, 8) and multiple platforms (Mac OS 10.5, 10.6 and Windows XP, 7).
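
    One hedged diagnostic to narrow it down (PHP; nothing here is from the original post): if the browser claims to have sent more than PHP parsed, the data is being truncated server-side (post_max_size, Suhosin-style variable limits) rather than lost in the form or in Flash.

        <?php
        // log how many fields PHP parsed vs. how large the request body was
        $expected = isset($_SERVER['CONTENT_LENGTH']) ? (int) $_SERVER['CONTENT_LENGTH'] : 0;
        error_log('POST fields parsed: ' . count($_POST) . ', Content-Length: ' . $expected);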


  • MS Source Server - source stream is apparently not there when viewing with srctool

    - by Tim Peel
    Hi, I have been playing around with the MS Source Server tools in the Debugging Tools for Windows install. At present, I am running my code/PDBs through the Subversion indexing command, which now runs as expected: it creates the stream for a given PDB file and writes it into the PDB. However, when I use that DLL and its associated PDB in Visual Studio 2008, it says the source code cannot be retrieved. If I check the PDB with srctool, it says none of the source files contained are indexed, which is very strange, as the process that produced it ran fine. If I check the stream that was generated by the svnindex.cmd run for the PDB, srctool says all source files are indexed. Why would there be a difference?

    I have opened the PDB file in a text editor, and I can see both the original references to the source files on my machine (also under the srcsrv header name) and the new "injected" source server links to my Subversion repository. Should both references still exist in the PDB? I would have expected one to be removed. Either way, Visual Studio 2008 will not pick up my source references, so I am a bit lost as to what to try next. As far as I can tell, I have done everything I should have. Does anyone have similar experiences? Many thanks.
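
    Two hedged checks that may help localize the mismatch (switch spellings are from memory of the srcsrv docs; confirm with srctool -? and pdbstr -?). Also worth ruling out on the Visual Studio side: "Enable source server support" under Tools > Options > Debugging must be ticked, since VS2008 silently ignores the stream without it.

        rem list the files the PDB itself reports as source-indexed
        srctool mylib.pdb

        rem dump the srcsrv stream from that same PDB for inspection
        pdbstr -r -p:mylib.pdb -s:srcsrv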


  • A few questions about a Rails model for a simple addressbook app

    - by user284194
    I have a Rails application that lists information about local services. My objectives for this model are as follows:

    1. Require the name and tag_list fields.
    2. Require one or more of the tollfreephone, phone, phone2, mobile, fax, email or website fields.
    3. If the paddress field has a value, encode it with the Geokit plugin.

    Here is my entry.rb model:

        class Entry < ActiveRecord::Base
          validates_presence_of :name, :tag_list
          validates_presence_of :tollfreephone or :phone or :phone2 or :mobile or :fax or :email or :website

          acts_as_taggable_on :tags
          acts_as_mappable :auto_geocode => { :field => :paddress,
                                              :error_message => 'Could not geocode physical address' }
          before_save :geocode_paddress
          validate :required_info

          def required_info
            unless phone or phone2 or tollfreephone or mobile or fax or email or website
              errors.add_to_base "Please have at least one form of contact information."
            end
          end

          private

          def geocode_paddress
            #if paddress_changed?
            geo = Geokit::Geocoders::MultiGeocoder.geocode(paddress)
            errors.add(:paddress, "Could not Geocode address") if !geo.success
            self.lat, self.lng = geo.lat, geo.lng if geo.success
            #end
          end
        end

    Requiring name and tag_list works, but requiring one (or more) of the tollfreephone, phone, phone2, mobile, fax, email or website fields does not. As for encoding with Geokit, with the model as it stands I have to enter an address in order to save a record, which is not the behavior I want. I would like the paddress field to be optional, but if it does have a value, the model should geocode it. As it stands, it always tries to geocode the incoming entry. The commented-out "if paddress_changed?" was not working, and I could not find something like "if paddress_exists?" that would work. Help with any of these issues would be greatly appreciated. I posted three questions pertaining to my model because I'm not sure whether they are preventing each other from working. Thank you for reading my questions.
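
    A hedged sketch pulling the fixes together (Rails 2.x idioms, matching the code above): the `or` chain in validates_presence_of is evaluated as a Ruby expression before the macro runs, so it validates only the first symbol and the custom validation has to carry requirement #2 alone; and both the :auto_geocode option and the unconditional before_save force geocoding, so drop :auto_geocode and guard the callback.

        class Entry < ActiveRecord::Base
          validates_presence_of :name, :tag_list
          validate :required_info

          acts_as_taggable_on :tags
          acts_as_mappable                    # no :auto_geocode, so paddress stays optional
          before_save :geocode_paddress, :if => Proc.new { |e| !e.paddress.blank? }

          def required_info
            unless phone or phone2 or tollfreephone or mobile or fax or email or website
              errors.add_to_base "Please have at least one form of contact information."
            end
          end

          private

          def geocode_paddress
            geo = Geokit::Geocoders::MultiGeocoder.geocode(paddress)
            if geo.success
              self.lat, self.lng = geo.lat, geo.lng
            else
              errors.add(:paddress, "Could not geocode address")
            end
          end
        end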


  • Django-South introspection rule doesn't work.

    - by Ory Band
    I'm using Django 1.2.3 and South 0.7.3, and I am trying to convert my app (named core) to use South. I have a custom model field, ImageWithThumbsField, which is basically the ol' django.db.models.ImageField with some extra attributes such as height, width, etc. When I run ./manage.py convert_to_south core, I receive South's freezing errors. I have no idea why; I'm probably missing something... The custom field is simple:

        from django.db.models import ImageField

        class ImageWithThumbsField(ImageField):
            def __init__(self, verbose_name=None, name=None, width_field=None,
                         height_field=None, sizes=None, **kwargs):
                self.verbose_name = verbose_name
                self.name = name
                self.width_field = width_field
                self.height_field = height_field
                self.sizes = sizes
                super(ImageField, self).__init__(**kwargs)

    And this is my introspection rule, which I add at the top of models.py:

        from south.modelsinspector import add_introspection_rules
        from lib.thumbs import ImageWithThumbsField

        add_introspection_rules(
            [
                (
                    (ImageWithThumbsField, ),
                    [],
                    {
                        "verbose_name": ["verbose_name", {"default": None}],
                        "name": ["name", {"default": None}],
                        "width_field": ["width_field", {"default": None}],
                        "height_field": ["height_field", {"default": None}],
                        "sizes": ["sizes", {"default": None}],
                    },
                ),
            ],
            ["^core/.fields/.ImageWithThumbsField",])

    These are the errors I receive:

        ! Cannot freeze field 'core.additionalmaterialphoto.photo'
        ! (this field has class lib.thumbs.ImageWithThumbsField)
        ! Cannot freeze field 'core.material.photo'
        ! (this field has class lib.thumbs.ImageWithThumbsField)
        ! Cannot freeze field 'core.material.formulaimage'
        ! (this field has class lib.thumbs.ImageWithThumbsField)
        ! South cannot introspect some fields; this is probably because they are custom
        ! fields. If they worked in 0.6 or below, this is because we have removed the
        ! models parser (it often broke things).
        ! To fix this, read http://south.aeracode.org/wiki/MyFieldsDontWork

    Does anybody know why? What am I doing wrong?
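
    A hedged guess at the cause: the second argument to add_introspection_rules is a list of regular expressions matched against the field class's full import path, which here (per the error output) is lib.thumbs.ImageWithThumbsField, not core/.fields/...; the dots should also be escaped. The rule tuple itself can stay, with only the pattern changed:

        add_introspection_rules(
            [
                # ... same rule tuple as above ...
            ],
            [r"^lib\.thumbs\.ImageWithThumbsField"])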


  • al32utf8 in oracle and SQL Server and DB2 pulling data

    - by Bob
    I have a non-UTF8 Oracle database running on 11.1.0.7. We need to support Greek characters, so we have two options:

    1. Use nvarchar/nclob fields for those fields that need Greek (it is not all fields). We have tested this and gotten it to work with Java coding.
    2. Convert the database to AL32UTF8. I am not asking how to do this; I got the procedure from the Oracle site/Oracle Support, and I know what is involved: lossy data, increasing the size of the database, and so on.

    My question: we have users of our system who connect to our database over database links but work in SQL Server and IBM DB2 databases. I do not have access to those databases, and I do not have experience with them. If they are not UTF-8 databases, what happens when they pull UTF8 data? I would assume that English/ASCII characters are fine and that the Greek ends up as junk data.

    I also ran the Oracle Character Set Scanner (the command-line utility used to assess the effects of a character set conversion). It says that my database will increase in size by about 20%. Does this have an effect on users with third-party databases? These are customers of our data, and there is a limit to how much access I can have to them to run tests. Any information you have would be welcome.


  • MySQL MyISAM table performance... painfully, painfully slow

    - by Salman A
    I've got a table structure that can be summarized as follows:

        pagegroup
            * pagegroupid
            * name
        has 3,600 rows

        page
            * pageid
            * pagegroupid
            * data
        references pagegroup; has 10,000 rows; can have anything between 1-700 rows
        per pagegroup; the data column is of type mediumtext and contains
        100k-200k bytes of data per row

        userdata
            * userdataid
            * pageid
            * column1
            * column2
            * column9
        references page; has about 300,000 rows; can have about 1-50 rows per page

    The above structure is pretty straightforward; the problem is that a join from userdata to pagegroup is terribly, terribly slow, even though I have indexed all the columns that should be indexed. The time needed to run such a join (userdata INNER JOIN page INNER JOIN pagegroup) exceeds 3 minutes, which is terribly slow considering that I am not selecting the data column at all. Example of the query that takes too long:

        SELECT userdata.column1, pagegroup.name
        FROM userdata
        INNER JOIN page USING( pageid )
        INNER JOIN pagegroup USING( pagegroupid )

    Please help by explaining why it takes so long and what I can do to make it faster.

    Edit #1: EXPLAIN returns the following:

        id  select_type  table      type    possible_keys        key      key_len  ref                         rows    Extra
        1   SIMPLE       userdata   ALL     pageid                                                             372420
        1   SIMPLE       page       eq_ref  PRIMARY,pagegroupid  PRIMARY  4        topsecret.userdata.pageid   1
        1   SIMPLE       pagegroup  eq_ref  PRIMARY              PRIMARY  4        topsecret.page.pagegroupid  1

    Edit #2:

        SELECT u.field2, p.pageid
        FROM userdata u
        INNER JOIN page p ON u.pageid = p.pageid;
        /* 0.07 sec execution, 6.05 sec fetch */

        id  select_type  table  type    possible_keys  key      key_len  ref                 rows    Extra
        1   SIMPLE       u      ALL     pageid                                               372420
        1   SIMPLE       p      eq_ref  PRIMARY        PRIMARY  4        topsecret.u.pageid  1       Using index

        SELECT p.pageid, g.pagegroupid
        FROM page p
        INNER JOIN pagegroup g ON p.pagegroupid = g.pagegroupid;
        /* 9.37 sec execution, 60.0 sec fetch */

        id  select_type  table  type   possible_keys  key          key_len  ref                      rows  Extra
        1   SIMPLE       g      index  PRIMARY        PRIMARY      4                                 3646  Using index
        1   SIMPLE       p      ref    pagegroupid    pagegroupid  5        topsecret.g.pagegroupid  3     Using where

    Moral of the story: keep medium/long text columns in a separate table if you run into performance problems such as this one.
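
    A hedged sketch of the fix the closing "moral" points at: with 100-200 KB of mediumtext per page row, every lookup into page drags huge rows through the buffer; moving the blob into a side table keeps the join path narrow (table and column names are illustrative):

        CREATE TABLE page_data (
            pageid INT NOT NULL PRIMARY KEY,
            data   MEDIUMTEXT
        ) ENGINE=MyISAM;

        INSERT INTO page_data (pageid, data) SELECT pageid, data FROM page;
        ALTER TABLE page DROP COLUMN data;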


  • how to 'scale' these three tables?

    - by iddqd
    I have the following tables:

        Players
            id
            playerName

        Weapons
            id
            type
            otherData

        Weapons2Player
            id
            playersID_reference
            weaponsID_reference

    That was nice and simple. Now I need to SELECT items from the Weapons table according to some of their characteristics, which I previously just packed into the otherData column (since they were only needed on the client side). The problem is that the types have varying characteristics, but also a lot of similar data. So I'm trying to decide between the following possibilities, all of which have their pros and cons.

    Solution A: kill the Weapons table and create a new table for each weapon type:

        Weapons_Swords
            id
            bladeType
            damage
            otherData

        Weapons_Guns
            id
            accuracy
            damage
            ammoType
            otherData

    But how will I link these to the Players? Create Weapons_Swords2Players, Weapons_Guns2Players, and so on for each weapon type? (This will result in a lot more JOINs when loading a player with all his weapons, and it's also more complicated to insert a new player.) Or add another column to Weapons2Player, called WeaponsTypeTable, and then sub-select against the correct weapons sub-table? (This seems easier, but not really right, though inserts are slightly easier, I guess.)

    Solution B: keep the Weapons table and add all the fields I need to it. The problem is that there will then be NULL fields, since not all weapon types use all fields (can't be right):

        Weapons
            id
            type
            accuracy
            damage
            ammoType
            bladeType
            otherData

    This seems to be pretty basic stuff, but I just can't decide what's best. Or is there a correct Solution C? Many thanks.
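
    For completeness, a hedged "Solution C" along supertype/subtype lines: shared columns stay in Weapons, each type gets a detail table keyed by the same id, and Weapons2Player keeps pointing at one table, so player loading is unchanged (names are illustrative):

        Weapons            -- shared: id, type, damage, otherData
        Weapons_SwordData  -- weaponID (PK, FK -> Weapons.id), bladeType
        Weapons_GunData    -- weaponID (PK, FK -> Weapons.id), accuracy, ammoType

        SELECT w.*, s.bladeType
        FROM Weapons w
        JOIN Weapons_SwordData s ON s.weaponID = w.id
        WHERE w.type = 'sword' AND s.bladeType = 'katana';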


  • Stored insert procedure in plpgsql

    - by crazyphoton
    I want to do something like this in PostgreSQL. I tried this:

        CREATE OR REPLACE FUNCTION create_patient(
            _name text, _email text, _phone text, _password text,
            _field1 text, _field2 text, _field3 timestamp, _field4 text,
            OUT _pid integer, OUT _id integer)
        RETURNS record AS $$
        DECLARE
            _id integer;
            _type text;
            _pid integer;
        BEGIN
            _type := 'patient';

            INSERT INTO patients (name, email, phone, field1, field2, field3)
            VALUES (_name, _email, _phone, _field1, _field2, _field3)
            RETURNING id INTO _pid;

            INSERT INTO users (username, password, type, pid, phone, language)
            VALUES (_email, _password, _type, _pid, _phone, _field4)
            RETURNING id INTO _id;
        END;
        $$ LANGUAGE plpgsql;

    But there are a lot of cases where I would not want to specify some of _field1/_field2/_field3/_field4, and would want the unspecified fields to use the default value from the table. Currently that is not possible, because to call this function I need to specify all the fields.

    TL;DR: Is there a simple way to create a wrapper procedure for INSERT in PL/pgSQL where I can specify which fields I want to insert?
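
    A hedged sketch of the usual answer: give the optional arguments DEFAULTs (PostgreSQL 8.4+) and call the function with named notation (9.0+), so unspecified fields fall back to the defaults declared on the function. Note these mirror, rather than inherit, the table defaults; truly inheriting them requires dynamic SQL with the DEFAULT keyword. The default values below are illustrative.

        CREATE OR REPLACE FUNCTION create_patient(
            _name text, _email text, _phone text, _password text,
            _field1 text DEFAULT 'none',
            _field2 text DEFAULT 'none',
            _field3 timestamp DEFAULT now(),
            _field4 text DEFAULT 'en',
            OUT _pid integer, OUT _id integer)
        RETURNS record AS $$
        BEGIN
            INSERT INTO patients (name, email, phone, field1, field2, field3)
            VALUES (_name, _email, _phone, _field1, _field2, _field3)
            RETURNING id INTO _pid;

            INSERT INTO users (username, password, type, pid, phone, language)
            VALUES (_email, _password, 'patient', _pid, _phone, _field4)
            RETURNING id INTO _id;
        END;
        $$ LANGUAGE plpgsql;

        -- call with only the fields you have (9.0+ named notation):
        SELECT * FROM create_patient('Ann', 'ann@example.com', '555', 'pw', _field4 := 'de');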


  • chrome extension login security with iframe

    - by Weaver
    I should note that I'm not a Chrome extension expert. However, I'm looking for advice or a high-level solution to a security concern I have with my Chrome extension. I've searched quite a bit but can't seem to find a concrete answer.

    The situation: I have a Chrome extension that needs the user to log in to our backend server. However, it was decided for design reasons that the default Chrome popup balloon was undesirable, so I've used a modal dialog and jQuery to make a styled popup that is injected with content scripts. Hence, the popup is injected into the DOM of the page you are visiting.

    The problem: everything works, but now that I need to implement login functionality, I've noticed a vulnerability. If the site we've injected our popup into knows the password field's ID, it could run a script to continuously monitor the password and username fields and store that data. Call me paranoid, but I see it as a risk. In fact, I wrote a mockup attack site that can correctly pull the user and password when they are entered into the given fields.

    My devised solution: I took a look at some other Chrome extensions, like Buffer, and noticed that they load their popup from their website and instead embed an iframe which contains the popup; the popup interacts with the server inside the iframe. My understanding is that iframes are subject to the same-origin policy like other websites, but I may be mistaken. As such, would doing the same thing be secure?

    TL;DR: If I embed an HTTPS login form from our server into a given DOM via a Chrome extension, are there security concerns around password sniffing? If this is not the best way to deal with Chrome extension logins, what is? Perhaps there is a way to declare text fields that JavaScript simply cannot interact with? Not too sure! Thank you so much for your time! I will happily clarify anything required.
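
    A hedged sketch of the iframe variant described above (content script; the URL and styling are illustrative): because the frame's document lives on your HTTPS origin, the host page's scripts cannot reach into it under the same-origin policy, which removes the field-sniffing vector (though clickjacking-style overlays drawn on top of the frame remain worth considering).

        var frame = document.createElement('iframe');
        frame.src = 'https://example.com/extension/login';   // your server, HTTPS
        frame.style.cssText = 'position:fixed; top:20%; left:50%; width:360px; ' +
                              'height:240px; margin-left:-180px; border:0; ' +
                              'z-index:2147483647;';
        document.body.appendChild(frame);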


  • Java - Class type from inside static initialization block

    - by DutrowLLC
    Is it possible to get the class type from inside the static initialization block? This is a simplified version of what I currently have:

        class Person extends SuperClass {
            String firstName;

            static {
                // This function is on the SuperClass. I'd like it to be able to
                // get "Person.class" without me having to explicitly type it in,
                // but "this.class" does not work in a static context.
                doSomeReflectionStuff(Person.class); // IN "SuperClass"
            }
        }

    This is closer to what I am doing, which is to initialize a data structure that holds information about the object and its annotations. Perhaps I am using the wrong pattern?

        public abstract class SuperClass {
            static void doSomeReflectionStuff(Class<?> classType, List<FieldData> fieldDataList) {
                Field[] fields = classType.getDeclaredFields();
                for (Field field : fields) {
                    // Initialize fieldDataList
                }
            }
        }

        public abstract class Person {
            @SomeAnnotation
            String firstName;

            // Holds information on each of the fields. I used a Map<String, FieldData>
            // in my actual implementation to map strings to the field information,
            // but that seemed a little wordy for this example.
            static List<FieldData> fieldDataList = new ArrayList<FieldData>();

            static {
                // Again, it seems dangerous to have to type in "Person.class"
                // (or Address.class, PhoneNumber.class, etc.) every time. Ideally,
                // I'd like to eliminate all this code from the subclass, since now
                // I have to copy and paste it into each subclass.
                doSomeReflectionStuff(Person.class, fieldDataList);
            }
        }
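
    A hedged sketch of one workaround: a static block cannot know the subclass, but an instance constructor can, via getClass(); caching per class keeps the reflection one-time, and the subclasses then carry no boilerplate at all (FieldData is the type from the question; the rest is illustrative):

        import java.lang.reflect.Field;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public abstract class SuperClass {
            private static final Map<Class<?>, List<FieldData>> CACHE =
                    new ConcurrentHashMap<Class<?>, List<FieldData>>();

            protected SuperClass() {
                Class<?> type = getClass();       // the concrete subclass, e.g. Person
                if (!CACHE.containsKey(type)) {
                    List<FieldData> list = new ArrayList<FieldData>();
                    for (Field field : type.getDeclaredFields()) {
                        // inspect annotations and populate list here
                    }
                    CACHE.put(type, list);
                }
            }
        }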

