Search Results

Search found 14737 results on 590 pages for 'dynamic tables'.

Page 532/590

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is that the record counts have grown to the point where my insert/update is taking too long and my session is getting killed by the governor. Here is pseudocode for my update process:

        sqlsel := 'SELECT col1, col2, col3, sysdate
                   FROM table2@remote_location dpi
                   WHERE (col1, col2, col3) IN
                       ( SELECT col1, col2, col3
                         FROM table2@remote_location
                         MINUS
                         SELECT DISTINCT col1, col2, col3
                         FROM table1 mpc
                         WHERE facility = '''||load_facility||''' )';
        EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1;

    I've tried the MERGE statement:

        MERGE INTO table1 t1
        USING ( SELECT col1, col2, col3 FROM table2@remote_location ) t2
        ON ( t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3 )
        WHEN NOT MATCHED THEN
            INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
            VALUES (t2.col1, t2.col2, t2.col3, sysdate);

    But there appears to be a confirmed bug in versions prior to Oracle 10.2.0.4 when a MERGE statement references a remote database. The chance of getting an enterprise-wide upgrade is slim, so is there a way to further optimize my first query, or to write it in another way that performs better? Thanks.
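    One avenue worth sketching (untested, and assuming the column lists above are complete): skip the PL/SQL collection entirely and do a single set-based insert, letting a DRIVING_SITE hint push the heavy set difference to the remote side. The literal 'SOME_FACILITY' is a placeholder for the load_facility variable.

        INSERT INTO table1 (col1, col2, col3, update_dttm)
        SELECT /*+ DRIVING_SITE(dpi) */ dpi.col1, dpi.col2, dpi.col3, SYSDATE
        FROM   table2@remote_location dpi
        WHERE  NOT EXISTS (
                   SELECT 1
                   FROM   table1 mpc
                   WHERE  mpc.col1 = dpi.col1
                     AND  mpc.col2 = dpi.col2
                     AND  mpc.col3 = dpi.col3
                     AND  mpc.facility = 'SOME_FACILITY'  -- placeholder for load_facility
               );

    Two caveats: NOT EXISTS with equality differs from MINUS if the key columns are nullable, and whether DRIVING_SITE actually helps depends on the relative sizes of the local and remote tables.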


  • Why can't we just use a hash of passphrase as the encryption key (and IV) with symmetric encryption algorithms?

    - by TX_
    Inspired by my previous question, I now have a very interesting idea: do you really ever need to use Rfc2898DeriveBytes or similar classes to "securely derive" the encryption key and initialization vector from the passphrase string, or will a simple hash of that string work equally well as a key/IV when encrypting data with a symmetric algorithm (e.g. AES, DES, etc.)?

    I see tons of AES encryption code snippets where the Rfc2898DeriveBytes class is used to derive the encryption key and initialization vector (IV) from the password string. It is assumed that one should use a random salt and a large number of iterations to derive a key/IV secure enough for the encryption. While deriving bytes from a password string this way is quite useful in some scenarios, I think it's not applicable when encrypting data with symmetric algorithms. Here is why: using a salt makes sense when there is a possibility of building precalculated rainbow tables, so that an attacker who gets his hands on a hash can look up the original password. But with symmetric data encryption, I think this is not required, as the hash of the password string, i.e. the encryption key, is never stored anywhere.

    So, if we just take the SHA1 hash of the password and use it as the encryption key/IV, isn't that going to be equally secure? What is the purpose of using the Rfc2898DeriveBytes class to generate the key/IV from the password string (a very performance-intensive operation) when we could just use a SHA1 (or any other) hash of that password? Hashing produces a random bit distribution in the key (as opposed to using the string bytes directly), and an attacker would have to brute-force the whole key space (e.g. for a 256-bit key, 2^256 combinations) anyway.

    So either I'm wrong in a dangerous way, or all those samples of AES encryption (including many upvoted answers here at SO) that use the Rfc2898DeriveBytes method to generate the encryption key and IV are just wrong.


  • Fix DB duplicate entries (MySQL bug)

    - by Silence
    I'm using MySQL 4.1. Some tables have duplicate entries that violate their constraints, and when I try to group rows, MySQL doesn't recognise the rows as being similar.

    Example: table A has a column "Name" with the Unique property. The table contains one row with the name 'Hach?' and one row with the same name, but with a square (a non-printable character I can't reproduce in this text field) at the end instead of the '?'. A GROUP BY on these two rows returns two separate rows.

    This causes several problems, including the fact that I can't export and reimport the database: on reimport, an error mentions that an INSERT has failed because it violates a constraint. In theory I could try to import, wait for the first error, fix the import script and the original DB, and repeat. In practice, that would take forever.

    Is there a way to list all the anomalies, or to force the database to recheck its constraints (and list all the values/rows that violate them)? I can supply the .MYD file if that helps.
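    One low-tech way to surface the anomalies (a sketch, assuming a table and column named A and Name as in the example): dump each name next to its raw bytes, so the invisible trailing character becomes visible and near-duplicates sort next to each other.

        -- Names containing anything outside printable ASCII are the suspects
        SELECT Name, HEX(Name) AS raw_bytes
        FROM   A
        WHERE  Name REGEXP '[^ -~]'
        ORDER  BY Name;

    HEX() and REGEXP both exist in MySQL 4.1; the character-class trick only works if legitimate names are plain ASCII, so treat it as a starting point rather than a complete audit.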


  • ASP.net MVC Linq-To-SQL Extended Class Field Binding

    - by user336858
    Hi there. The short version of this question is: "Is there a way to get automatic view-object binding for fields defined in a partial class of a LINQ-to-SQL generated class?" Apologies if it's been asked before.

    Example: suppose I have a typical MVC setup with the tables:

        Posts {PostID, ...}
        Categories {CategoryID, ...}

    A post can have more than one category, and a category can identify more than one post, so suppose further that I need an extra table to handle the many-to-many relationship between posts and categories:

        PostCategories {PostID, CategoryID, ...}

    As far as I know, there's no way to map this directly in LINQ-to-SQL right now, so I have to shoehorn it in by adding a partial Post class to the project to add that functionality. Something like:

        public partial class Post
        {
            public IEnumerable<Category> Categories { get { ... } set { ... } }
        }

    So here's my question: if a user is editing a Post object through my MVC application's front end, they might enter an invalid category. When the server recognizes the invalid input, the usual practice is to return the faulty object to the original view for re-editing, along with some error messages, and the fields on the edit page are re-populated with the provided values. However, I don't know how to get this mechanism to work with the properties I created in the partial class shown above. Any terminology, links, or tips you can provide would be tremendously helpful!


  • How to map oracle timestamp to appropriate java type in hibernate?

    - by jschoen
    I am new to Hibernate and I am stumped. In my database I have tables with columns of type TIMESTAMP(6). I am using NetBeans 6.5.1, and when I generate the hibernate.reveng.xml, hbm.xml, and POJO files, it maps those columns to type Serializable. This is not what I expected, nor what I want them to be. I found a post on the Hibernate forums saying to place a type mapping in the hibernate.reveng.xml file, but in NetBeans you are not able to generate the mappings from that file (it creates a new one every time), and it does not seem to be able to re-generate them from the file either (at least according to what I have read, that ability is slated for version 7).

    So I am trying to figure out what to do. I am more inclined to believe I am doing something wrong, since I am new to this and it seems like it would be a common problem for others. So what am I doing wrong? And if I am not doing anything wrong, how do I work around this? I am using NetBeans 6.5, Oracle 10g, and I believe Hibernate 3 (it came with my NetBeans).

    Edit: I meant to say I also found a related Stack Overflow question, but it is really a different problem.


  • [LaTeX] positions of page numbers, position of chapter headings, chapters AND Table of Contents, Ref

    - by kaikanmonaco
    I am writing my PhD thesis (120+ pages) in LaTeX using the book document class; the deadline is approaching and I am struggling with layout problems. I am posting both problems in this one thread because I am not sure whether their solutions are related. The problems are:

    1.) The page numbers are mostly located at the top right of each page (which is correct and where I want them). However, on the first page of each chapter, and on the first page of what I call "special chapters" (Table of Contents, List of Figures, List of Tables, References, Index), the page number is placed bottom-centered. My university will not accept the thesis like this: the page number must ALWAYS be at the top right of every page, even on the first page of a chapter or of something like the Table of Contents. How can I fix this?

    2.) On the first page of chapters and "special chapters", the chapter title is located far too low on the page; I believe this is the standard layout of the book class. However, the chapter title must start at the very top of the page, i.e. at the same height as the normal text on the pages that follow. I mean the chapter title, not the header: if there is a chapter called "Chapter 1 Dynamics of foobar under mechanical stress", that text has to start at the top of the page, but right now it starts several centimeters below the top.

    I have tried all kinds of things to no effect; I'd be very thankful for a solution! Thanks.


  • Query returning related assets

    - by GMo
    I have 2 tables: an assets table, which holds digital assets (e.g. articles, images, etc.), and an asset_links table, which maps 1-1 relationships between assets contained within the assets table. Here are the table definitions:

        Asset
        +---------------+--------------+------+-----+---------+----------------+
        | Field         | Type         | Null | Key | Default | Extra          |
        +---------------+--------------+------+-----+---------+----------------+
        | id            | int(11)      | NO   | PRI | NULL    | auto_increment |
        | source        | varchar(255) | YES  |     | NULL    |                |
        | title         | varchar(255) | YES  |     | NULL    |                |
        | date_created  | datetime     | YES  |     | NULL    |                |
        | date_embargo  | datetime     | YES  |     | NULL    |                |
        | date_expires  | datetime     | YES  |     | NULL    |                |
        | date_updated  | datetime     | YES  |     | NULL    |                |
        | keywords      | varchar(255) | YES  |     | NULL    |                |
        | status        | int(11)      | YES  |     | NULL    |                |
        | priority      | int(11)      | YES  |     | NULL    |                |
        | fk_site       | int(11)      | YES  | MUL | NULL    |                |
        | resource_type | varchar(255) | YES  |     | NULL    |                |
        | resource_id   | int(11)      | YES  |     | NULL    |                |
        | fk_user       | int(11)      | YES  | MUL | NULL    |                |
        +---------------+--------------+------+-----+---------+----------------+

        Asset_links
        +-----------+---------+------+-----+---------+----------------+
        | Field     | Type    | Null | Key | Default | Extra          |
        +-----------+---------+------+-----+---------+----------------+
        | id        | int(11) | NO   | PRI | NULL    | auto_increment |
        | asset_id1 | int(11) | YES  |     | NULL    |                |
        | asset_id2 | int(11) | YES  |     | NULL    |                |
        +-----------+---------+------+-----+---------+----------------+

    The asset_links table contains the following rows: 1 - 3, 1 - 4, 2 - 10, 2 - 56.

    I am looking to write one query that returns all assets satisfying some search criteria, and, within the same query, all of the linked asset data for each of those assets. E.g. a query returning assets 1 and 2 would return:

        Asset 1 attributes - Asset 3 attributes - Asset 4 attributes
        Asset 2 attributes - Asset 10 attributes - Asset 56 attributes

    What is the best way to write the query?
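    A minimal sketch (assuming MySQL and the table names above) that returns one row per link rather than one row per asset; pivoting the linked assets into a fixed set of columns is awkward in plain SQL because the number of links per asset varies:

        SELECT a.id    AS asset_id,
               a.title AS asset_title,
               b.id    AS linked_asset_id,
               b.title AS linked_asset_title
        FROM   asset a
        JOIN   asset_links l ON l.asset_id1 = a.id
        JOIN   asset b       ON b.id = l.asset_id2
        WHERE  a.title LIKE '%search-term%'   -- placeholder search criteria
        ORDER  BY a.id;

    The application can then group rows by asset_id to rebuild the "asset plus its linked assets" shape. If links can point in either direction, a UNION of this query with asset_id1/asset_id2 swapped covers both.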


  • Grails - Where to store properties related to domains

    - by GalmWing
    This is something I have been struggling with for some time now. I have many (20 or so) static arrays of values. I say static because that is how I'm actually storing them: as static arrays inside some domain classes. For example, if I have a list of known websites, I do:

        class Website {
            ...
            static websites = ["web1", "web2", ...]
        }

    But I do this just while developing, because I can easily change the arrays if needed. What should I do when the application is ready for deployment? In my project it is very probable that, at some point, these arrays of values will change.

    I've been researching the matter. One can store application properties in an external .properties file, but storing an array there is impossible, even futile: if some array gets an additional value, the application can't recognize it until the name of the new property is added wherever it is needed. Another approach is to store this information in the database, but it seems like a waste to add 20 or more tables that will have just two columns, an id and a name. The last option, as far as I know, would be XML. I'm not very experienced with XML; Groovy seems to have a way of creating and reading XML files relatively easily, but I don't know how difficult it would be to modify an XML file whose layout is predefined in the application. Needless to say, storing them in Config.groovy is not an option, since any change would require a recompile.

    I haven't come across a "standard" (maybe a best practice?) way of dealing with these. So the question is: where should I store these arrays?
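    One common alternative to twenty tiny tables (a sketch in MySQL-flavored DDL, not Grails-specific; table and column names are illustrative): collapse all the lists into a single generic lookup table, so adding a value is an INSERT rather than a schema change.

        CREATE TABLE lookup_value (
            id       INT          NOT NULL AUTO_INCREMENT PRIMARY KEY,
            category VARCHAR(64)  NOT NULL,   -- e.g. 'website'
            value    VARCHAR(255) NOT NULL,
            UNIQUE (category, value)
        );

        -- Fetching one list:
        SELECT value FROM lookup_value WHERE category = 'website' ORDER BY value;

    In Grails this maps naturally to a single domain class with a category field, which GORM can then cache.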


  • Entity Framework 4 Entity with EntityState of Unchanged firing update

    - by Andy
    I am using EF 4, mapping all CUD operations for my entities to sprocs. I have two tables, ADDRESS and PERSON; a PERSON can have multiple ADDRESSes associated with it. Here is the code I am running:

        Person person = (from p in context.People
                         where p.PersonUID == 1
                         select p).FirstOrDefault();

        Address address = (from a in context.Addresses
                           where a.AddressUID == 51
                           select a).FirstOrDefault();

        address.AddressLn2 = "Test";
        context.SaveChanges();

    The Address being updated is associated with the Person I am retrieving, although they are not explicitly linked in any way in the code. When context.SaveChanges() executes, not only does the update sproc for my Address entity get fired (as you would expect), but so does the update sproc for the Person entity, even though no change was made to the Person entity. When I check the EntityState of both objects before the context.SaveChanges() call, my Address entity has an EntityState of "Modified" and my Person entity has an EntityState of "Unchanged".

    Why is the update sproc being called for the Person entity? Is there some setting I can use to prevent this from happening?


  • performance issue: difference between select s.* vs select *

    - by kamil
    Recently I had a performance problem with one of my queries, described here: "poor Hibernate select performance comparing to running directly - how debug?" After a long struggle, I finally discovered that a query with a qualified select list, like:

        select sth.* from Something as sth ...

    is 300x slower than the same query started this way:

        select * from Something as sth ...

    Could somebody explain why that is? Some external documentation on this would be really useful.

    The tables used for testing: SALES_UNIT contains some basic info about a sales unit node, such as its name; its only association is a ManyToOne to the table SALES_UNIT_TYPE, and its primary key is ID plus the date field VALID_FROM_DTTM. SALES_UNIT_RELATION contains the PARENT-CHILD relations between sales unit nodes; it consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID, and VALID_TO_DTTM/VALID_FROM_DTTM, has no associations to any other table, and its PK is ..PARENT_ID, ..CHILD_ID, and VALID_FROM_DTTM.

    The actual queries I ran were:

        select s.* from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

        select * from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

    Same query, both use a left join; the only difference is the select list.
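    When two textually similar queries diverge this much, the first step is usually to compare execution plans rather than the SQL text. A minimal sketch, assuming an Oracle back end (the schema naming suggests one; other databases have their own EXPLAIN syntax):

        EXPLAIN PLAN FOR
        select s.* from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null;

        SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

    Running this for both variants and diffing the output shows whether the optimizer picked a different join method or access path once the projection changed; note the two statements are not even semantically identical, since select * also returns the joined table's columns while select s.* does not.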


  • Postgresql Output column from another table

    - by muffin
    I'm using PostgreSQL. My question is specifically about querying a table that references another table, and I'm really having trouble with it; in fact, I'm absolutely mentally blocked. I'll try to define the relations of the tables as well as I can.

    I have a table called entry. Each entry has a group_id; when entries are 'advanced' to the next stage, the old entries are marked is_active = false and a new assignment is done, so entries C & D would be advanced stages of entries A & B. I have another table, storage_log, which acts as a record keeper and is what an entry's storage_log_id refers to. Then there is one more table, storage, which records where the entries are actually stored, including the storage label.

    To define my problem properly: each entry has a storage_log_id (but some don't have one yet), and a storage_log row has a storage_id that refers to the storage table, where the storage label can be found. The SQL query I'm trying to write should show the actual storage label instead of the log id. This is what I've done so far:

        select e.id, e.group_id, e.name, e.stage, s.label
        from operational.entry e, operational.storage_log sl, operational.storage s
        where e.storage_log_id = sl.id
        and sl.storage_id = s.id

    But this returns just 3 rows, showing only the entries that have storage_log_id set. I should be able to see even those without logs, and especially the active ones; adding e.is_active = true to the condition makes the results empty. So, yeah, I'm stuck. Need help. Thanks, guys!
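    Since the missing rows are exactly the ones with no match in storage_log, the implicit inner joins are the likely culprit: they drop every entry whose storage_log_id is NULL. A sketch of the outer-join version (same names as the question; label comes back NULL for unstored entries):

        SELECT e.id, e.group_id, e.name, e.stage, s.label
        FROM   operational.entry e
        LEFT JOIN operational.storage_log sl ON sl.id = e.storage_log_id
        LEFT JOIN operational.storage s      ON s.id  = sl.storage_id
        WHERE  e.is_active = true;

    LEFT JOIN keeps every entry row and fills the storage columns with NULL where no log exists, so the is_active filter no longer empties the result.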


  • Large Product catalog with statistics - alternatives to Sql Server?

    - by Eric P
    I am building the UI for a large product catalog (millions of products), using SQL Server, full-text search, and ASP.NET MVC. Tables are normalized and indexed, and most queries take less than a second to return.

    The issue is this: say a user searches by keyword. On the search results page I need to display/query for:

        - the first 20 matching products (paged, sorted)
        - the total count of matching products, for paging
        - the list of stores carrying matching products
        - the list of brands of matching products
        - the list of colors of matching products

    Each query takes about 0.5 to 1 second, so altogether it is around 5 seconds, and I would like the whole page to load in under 1 second. There are several approaches:

        1. Optimize the queries even more. I have already spent a lot of time on this, so I am not sure they can be pushed further.
        2. Load products first, then load the rest of the information using AJAX. More of a workaround, and it would need a UI revision.
        3. Re-organize the data to be more report-friendly. I have already aggregated a lot of fields.

    I checked out several similar sites, for example zappos.com. Not only do they display the same information I need in under 1 second, but they also include statistics (the number of results in each category). For example, here is a search for the keyword "white": http://www.zappos.com/white

    How do sites like Zappos and Amazon make their results, filters, and stats appear almost instantly?
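    One pattern worth sketching in T-SQL (table and column names are illustrative, not the poster's schema): materialize the keyword match once into a temp table of product IDs, then answer the count and every facet from that small set instead of re-running the full-text search five times.

        -- Run the expensive full-text predicate once
        SELECT p.ProductID
        INTO   #matches
        FROM   Products p
        WHERE  FREETEXT(p.SearchText, 'white');

        SELECT COUNT(*) AS total FROM #matches;        -- paging count

        SELECT p.Brand, COUNT(*) AS cnt                -- brand facet
        FROM   Products p
        JOIN   #matches m ON m.ProductID = p.ProductID
        GROUP  BY p.Brand;

        -- ...repeat the GROUP BY for stores and colors

    Sites at Zappos/Amazon scale typically go further and serve facets from a dedicated search engine with precomputed inverted indexes, but the materialize-once pattern alone often collapses five ~1 s queries into roughly one.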


  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (ie, not Rails) in PostgreSQL. At least one has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I can easily call these every time before running any save operation in the code, but DRY makes me think I should have the code repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism.

    Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer to set it up so that the queries are run as soon as I connect automatically, but it doesn't behave as I expect it to. Here is the code in the initializer:

        class ActiveRecord::Base
          # extend the class methods, not the instance methods
          class << self
            alias :old_establish_connection :establish_connection # hide the default

            def establish_connection(*args)
              ret = old_establish_connection(*args) # call the default

              # set up necessary session variables for audit logging;
              # call these after calling default, to make sure conn is established 1st
              db = self.class.connection
              db.execute("SELECT SV.set('current_user', 'test@localhost')")
              db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err

              ret # return the default's original value
            end
          end
        end

        puts "Loaded custom establish_connection into ActiveRecord::Base"

        sycobuny:~/rails$ ruby script/server
        => Booting WEBrick
        => Rails 2.3.5 application starting on http://0.0.0.0:3000
        Loaded custom establish_connection into ActiveRecord::Base

    This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless because I can't check object_id for any worthwhile information, and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier.

    If this isn't a proper way to execute code immediately on connection to the database, then what is?


  • How to close and open access to SQL Server 2008 in Windows application?

    - by hgulyan
    Hi, I have an MS Access 97 application (but the question is general) working directly with SQL Server 2008 (without an application server or anything in between). The number of users can be up to 1000, and Windows Authentication is used. The question is: how do I handle access modes, so that

        - some users are allowed to work only in read-only mode, and
        - some users have no access to the DB at all for some period of time?

    My ideas so far:

        1. Use a table with a mode id for every group of users that should work the same way. On form load, the application queries that table for the mode id.
        2. Use triggers on the tables that must honor the mode. Each trigger queries the mode value and rejects the operation if access is closed or the mode is read-only.

    I know these are not the best solutions, which is why I'm asking for your advice. There is one more point: if the mode is changed to "access closed" for a group of users, that group must not be able to query the DB from that moment on. The first solution won't achieve this, because a user can already be inside the application at that moment, and no form-load event will fire. How can I do this? Is there an optimal solution? Thank you, any help would be appreciated.
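    A sketch of the trigger idea in T-SQL (table, trigger, and group names are made up for illustration): because the trigger runs inside the user's own transaction, changing the mode row takes effect for the very next write statement, which addresses the "must stop immediately" requirement for writes.

        CREATE TABLE dbo.AccessMode (
            GroupName sysname NOT NULL PRIMARY KEY,
            ModeID    tinyint NOT NULL  -- 0 = closed, 1 = read-only, 2 = full
        );

        CREATE TRIGGER trg_BlockWrites ON dbo.SomeAuditedTable
        FOR INSERT, UPDATE, DELETE
        AS
        BEGIN
            IF EXISTS (SELECT 1 FROM dbo.AccessMode
                       WHERE GroupName = 'clerks'   -- placeholder for a real group lookup
                         AND ModeID < 2)
            BEGIN
                RAISERROR('Writes are currently disabled for your group.', 16, 1);
                ROLLBACK TRANSACTION;
            END
        END;

    Triggers cannot block SELECTs, though; fully closing read access on the spot is better done by revoking or denying permissions on the group's database role, or by killing the group's open sessions.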


  • Mixing together Connect by, inner join and sum with Oracle

    - by François
    Hey there, I need help with an Oracle query (excuse my English in advance). Here is my setup: I have two tables called, respectively, tasks and timesheets. The tasks table is recursive, so each task can have multiple subtasks. Each timesheet is associated with a task (not necessarily a root task) and contains the number of hours worked on it. Example:

        Tasks
        id:1 | name: Task A    | parent_id: NULL
        id:2 | name: Task A1   | parent_id: 1
        id:3 | name: Task A1.1 | parent_id: 2
        id:4 | name: Task B    | parent_id: NULL
        id:5 | name: Task B1   | parent_id: 4

        Timesheets
        id:1 | task_id: 1 | hours: 1
        id:2 | task_id: 2 | hours: 3
        id:3 | task_id: 3 | hours: 1
        id:5 | task_id: 5 | hours: 1
        ...

    What I want to do: I want a query that returns the sum of all the hours worked on each "task hierarchy". With the example above, it would return: Task A - 5 hour(s) | Task B - 1 hour(s). At first I tried this:

        SELECT TaskName, Sum(Hours) "TotalHours"
        FROM (
            SELECT replace(sys_connect_by_path(decode(level, 1, t.name), '~'), '~') As TaskName,
                   ts.hours as hours
            FROM tasks t
            INNER JOIN timesheets ts ON t.id = ts.task_id
            START WITH PARENTOID = -1
            CONNECT BY PRIOR t.id = t.parent_id
        )
        GROUP BY TaskName
        HAVING Sum(Hours) > 0
        ORDER BY TaskName

    And it almost works. The only problem is that if there is no timesheet for a root task, the whole hierarchy is skipped, even though there might be timesheets for the child rows; that is exactly what happens with Task B1. I know the INNER JOIN is causing the problem, but I'm not sure how to get rid of it. Any idea how to solve this? Thank you.
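    A sketch of one way around the inner join (Oracle 10g syntax; it uses the parent_id IS NULL root convention from the sample data rather than PARENTOID = -1, and assumes at most one timesheet row per task as in the example - with several per task, pre-aggregate them in a subquery first to avoid double counting). CONNECT_BY_ROOT labels every row with its root task, and the LEFT JOIN keeps tasks that have no timesheet:

        SELECT root_name, SUM(hours) AS total_hours
        FROM (
            SELECT CONNECT_BY_ROOT t.name AS root_name,
                   NVL(ts.hours, 0)       AS hours
            FROM   tasks t
            LEFT JOIN timesheets ts ON ts.task_id = t.id
            START WITH t.parent_id IS NULL
            CONNECT BY PRIOR t.id = t.parent_id
        )
        GROUP BY root_name
        HAVING SUM(hours) > 0
        ORDER BY root_name;

    Because Oracle evaluates the join before walking the hierarchy, Task B survives with 0 hours of its own and still collects the hours logged against B1.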


  • Can a Snapshot transaction fail and only partially commit in a TransactionScope?

    - by Travis Brooks
    Greetings. I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:

        using (var tran = MyDataLayer.Transaction())
        {
            MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
            MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
            tran.Commit();
        }

    I've simplified this a bit for posting, but what's going on is that MyDataLayer.Transaction() creates a TransactionScope with IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by SprocTheFirst with no corresponding data from CallSomethingThatEventuallyDoesLinqToSql. The only way records should appear in the tables I'm looking at is through SprocTheFirst, and it is only ever called in this one function, so if it was called and succeeded then I would expect CallSomethingThatEventuallyDoesLinqToSql to have been called and succeeded as well, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time the records from SprocTheFirst were created.

    So, is it possible that a transaction, or more properly a declarative TransactionScope with Snapshot isolation level, can somehow fail and only partially commit?


  • LINQ to SQL - database relationships won't update after submit

    - by Quantic Programming
    I have a database with the tables Users and Uploads. The important columns are:

        Users   -> UserID
        Uploads -> UploadID, UserID

    The primary key in the relationship is Users -> UserID and the foreign key is Uploads -> UserID. In LINQ to SQL, I do the following operations to store a file:

        var upload = new Upload();
        upload.UserID = user.UserID;
        upload.UploadID = XXX;
        db.Uploads.InsertOnSubmit(upload);
        db.SubmitChanges();

    If I then rerun the application (so the db object is re-built, of course) and do something like this:

        foreach (var upload in user.Uploads)

    I get all the uploads with that user's ID, like the one added in the previous example. The problem is that my application doesn't update the user.Uploads collection right after adding an upload and submitting the changes; i.e., I don't get the newly added uploads. The user object is stored in the Session object. At first, I thought that the LINQ to SQL framework doesn't update the reference of the object, and that I should therefore simply "reset" the user object with a new SQL request:

        Session["user"] = db.Users.Where(u => u.UserID == user.UserID).SingleOrDefault();

    (where user is the previous user), but it didn't help. Please note: after rerunning the application, user.Uploads does have the new upload!

    Has anyone experienced this type of problem, or is it normal behavior? I am a newbie to this framework and would gladly take any advice. Thank you!


  • Java: over-typed structures?

    - by HH
    By "over-typed structure" I mean a data structure that accepts different types, primitive or user-defined. I think Ruby supports mixed types in structures such as tables. I tried a table holding the types 'String', 'char', and 'File' in Java, but it errors out. How can I have an over-typed structure in Java? How do I declare the types, and what about initialization? Suppose a structure:

        INDEX   VAR       FILETYPE
        0  ->   file      FILE
        1  ->   lineMap   SizeSequence
        2  ->   type      char
        3  ->   binary    boolean
        4  ->   name      String
        5  ->   path      String

    Code:

        import java.io.*;
        import java.util.*;

        public class Object {
            public static void print(char a) {
                System.out.println(a);
            }
            public static void print(String s) {
                System.out.println(s);
            }
            public static void main(String[] args) {
                Object[] d = new Object[6];
                d[0] = new File(".");
                d[2] = 'T';
                d[4] = ".";
                print(d[2]);
                print(d[4]);
            }
        }

    Errors:

        Object.java:18: incompatible types
        found   : java.io.File
        required: Object
                d[0] = new File(".");
                       ^
        Object.java:19: incompatible types
        found   : char
        required: Object
                d[2] = 'T';
                       ^


  • Why does the WCF 3.5 REST Starter Kit do this?

    - by Brandon
    I am setting up a REST endpoint with operations that look like the following:

        [WebInvoke(Method = "POST", UriTemplate = "?format=json",
                   BodyStyle = WebMessageBodyStyle.WrappedRequest,
                   ResponseFormat = WebMessageFormat.Json)]

    and

        [WebInvoke(Method = "DELETE", UriTemplate = "?token={token}&format=json",
                   ResponseFormat = WebMessageFormat.Json)]

    The above throws the following error:

        UriTemplateTable does not support '?format=json' and '?token={token}&format=json'
        since they are not equivalent, but cannot be disambiguated because they have
        equivalent paths and the same common literal values for the query string.
        See the documentation for UriTemplateTable for more detail.

    I am not an expert at WCF, but I would imagine it should map first by HTTP method and then by URI template; it appears to be backwards. If both of my URI templates are ?token={token}&format=json, this works, because the templates are equivalent, and WCF then appears to look at the HTTP method, where one is POST and the other is DELETE.

    Is REST supposed to work this way? Why are the UriTemplate tables not sorted first by HTTP method and then by URI template? This can cause serious frustration when one HTTP method requires a parameter that another does not, or when I want optional parameters (e.g., defaulting to XML if the 'format' parameter is not passed).


  • Kohana Auth Library Deployment

    - by Steve
    My Kohana app runs perfectly on my local machine, but when I deployed it to a server (and adjusted the config files appropriately), I can no longer log in. I've traced through the app's login routine on both my local version and the server version, and they agree with each other all the way until the logged_in() routine in the auth.php controller, where suddenly, at line 140 (the is_object($this->user) test), the user object no longer exists!?

    The login() call that precedes logged_in() successfully passes the following test, which causes a redirect to the logged_in() function:

        if (Auth::instance()->login($user, $post['password']))

    Yes, the password, hash, etc. all work perfectly. Here is the offending code:

        public function logged_in()
        {
            if ( ! is_object($this->user))
            {
                // No user is currently logged in
                url::redirect('auth/login');
            }
            ...
        }

    FYI: all the rest of the code works, because I have a temporary backdoor that lets me use the application (view pages of tables, etc.) without being logged in. As the code is the same between my local installation and the server, I reckon it must be some server setting that is messing with me. Any ideas?


  • How can I improve this SQL to avoid several problems with its results?

    - by Josh Curren
    I am having some problems with a search query. Currently it only returns results that have at least 1 row in the maintenance_parts table; I would like it to return results even if there are 0 parts rows. My second problem is that when a search for a vehicle should return multiple results (multiple maintenance rows), it returns only 1 result for that vehicle.

    Some background info: the user has 2 fields to fill out, vehicle and keywords. The vehicle field is meant to allow searching on the make, model, VIN, truck number (often 2-3 digits, or a letter prefix followed by 2 digits), and a few other fields that belong to the truck table. The keywords are meant to search most fields in the maintenance and maintenance_parts tables (things like the description of the work, the part name, the part number). The maintenance_parts table can contain 0, 1, or more rows for each maintenance row; the truck table contains exactly 1 row for every maintenance row, and a truck can have multiple maintenance records.

        SELECT M.maintenance_id, M.some_id, M.type_code, M.service_date, M.mileage,
               M.mg_id, M.mg_type, M.comments, M.work_done,
               MATCH(M.comments, M.work_done) AGAINST('$keywords')
             + MATCH(P.part_num, P.part_desc, P.part_ref) AGAINST('$keywords')
             + MATCH(T.truck_number, T.make, T.model, T.engine, T.vin_number,
                     T.transmission_number, T.comments) AGAINST('$vehicle') AS score
        FROM maintenance M, maintenance_parts P, truck T
        WHERE M.maintenance_id = P.maintenance_id
          AND M.some_id = T.truck_id
          AND M.type_code = 'truck'
          AND ( (MATCH(T.truck_number, T.make, T.model, T.engine, T.vin_number,
                       T.transmission_number, T.comments) AGAINST('$vehicle')
                 OR T.truck_number LIKE '%$vehicle%')
                OR MATCH(P.part_num, P.part_desc, P.part_ref) AGAINST('$keywords')
                OR MATCH(M.comments, M.work_done) AGAINST('$keywords') )
          AND M.status = 'A'
        GROUP BY maintenance_id
        ORDER BY score DESC, maintenance_id DESC
        LIMIT 0, $limit
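    A sketch of a fix for the first problem (same schema and placeholders as the question): move maintenance_parts out of the comma-join into a LEFT JOIN, so maintenance rows with no parts survive, and wrap the parts MATCH in COALESCE so a missing parts row contributes 0 to the score instead of making it NULL.

        SELECT M.maintenance_id, M.some_id, M.type_code, M.service_date, M.mileage,
               M.mg_id, M.mg_type, M.comments, M.work_done,
               MATCH(M.comments, M.work_done) AGAINST('$keywords')
             + COALESCE(MATCH(P.part_num, P.part_desc, P.part_ref) AGAINST('$keywords'), 0)
             + MATCH(T.truck_number, T.make, T.model, T.engine, T.vin_number,
                     T.transmission_number, T.comments) AGAINST('$vehicle') AS score
        FROM maintenance M
        JOIN truck T ON T.truck_id = M.some_id
        LEFT JOIN maintenance_parts P ON P.maintenance_id = M.maintenance_id
        WHERE M.type_code = 'truck'
          AND M.status = 'A'
          AND ( MATCH(T.truck_number, T.make, T.model, T.engine, T.vin_number,
                      T.transmission_number, T.comments) AGAINST('$vehicle')
             OR T.truck_number LIKE '%$vehicle%'
             OR MATCH(P.part_num, P.part_desc, P.part_ref) AGAINST('$keywords')
             OR MATCH(M.comments, M.work_done) AGAINST('$keywords') )
        GROUP BY M.maintenance_id
        ORDER BY score DESC, maintenance_id DESC
        LIMIT 0, $limit

    With several parts rows per maintenance row, the GROUP BY picks an arbitrary parts score, so consider MAX(...) around the score expression. Note also the interpolated $keywords/$vehicle placeholders invite SQL injection; parameterized queries are safer.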


  • Too many columns to index - use mySQL Partitions?

    - by Christopher Padfield
    We have an application with a table whose 20+ columns are all searchable. Building indexes for all these columns would make write queries very slow, and any really useful index would often have to span multiple columns, increasing the number of indexes needed even further. However, for 95% of these searches, only a small subset of the rows needs to be searched, and quite a small number at that - say 50,000 rows.

    So we have considered using MySQL partitioned tables: adding a column that is basically isActive, and dividing the table into two partitions by it. Most search queries would run with isActive = 1, hitting the small 50,000-row partition and staying fast without other indexes. The only wrinkle is that isActive is not fixed for a given row; i.e., it is not based on the row's date or anything immutable like that - we will need to update isActive based on how the data in that row is used. As I understand it, that is no problem: the row would just be moved from one partition to the other during the UPDATE query.

    We do have a primary key on id, though, and I am not sure whether this is a problem: the manual seemed to suggest the partitioning has to be based on any primary keys. That would be a huge problem for us, because the primary key id has no bearing on whether the row isActive.
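    The manual's actual rule is worth stating precisely: every unique key on a partitioned MySQL table, including the primary key, must include all columns used in the partitioning expression. So partitioning on the flag is possible if the primary key is widened to include it. A hypothetical sketch (table and partition names are illustrative):

        ALTER TABLE searchable
            DROP PRIMARY KEY,
            ADD PRIMARY KEY (id, is_active);

        ALTER TABLE searchable
            PARTITION BY LIST (is_active) (
                PARTITION p_active   VALUES IN (1),
                PARTITION p_inactive VALUES IN (0)
            );

    The composite key no longer enforces uniqueness of id by itself, but id stays unique in practice when it is AUTO_INCREMENT; and an UPDATE that flips is_active migrates the row between partitions automatically, as described above.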


  • How to configure a NSPopupButton for displaying multiple values in a TableView?

    - by jekmac
    Hi there! I'm using two entities, A and B, with a many-to-many relationship: entity A has an attribute aAttrib and a to-many relationship aRelat to entity B; entity B has an attribute bAttrib and a to-many relationship bRelat back to entity A.

    Now I am building an interface with two tables, one for entity A and one for entity B. The table for entity B has two columns: one for bAttrib and one for the relationship bRelat. The relationship column should be an NSPopupButtonCell displaying multiple aAttrib values. I would like to set all the bindings in Interface Builder.

    I have one NSArrayController per entity (Object Controller Mode: Entity), each with its Managed Object Context parameter bound to File's Owner. The pop-up table column has these bindings: Content bound to the Entity A controller with controller key arrangedObjects; Content Values bound to the Entity A controller with model key path aAttrib; Selected Object bound to the Entity B controller with model key path bRelat.

    I know this configuration doesn't allow setting multiple values, but I don't know what the right one is. I'm getting the following message:

        HIToolbox: ignoring exception 'Unacceptable type of value for to-many relationship:
        property = "bRelat"; desired type = NSSet; given type = NSCFString;
        value = testValue.' that raised inside Carbon event dispatch...

    Does anyone have any idea?


  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now, and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have blogs with a corresponding table, as well as a table that keeps track of how many times each blog has been viewed. This second table has a huge number of records, since the page is relatively high-traffic and every hit is logged as an individual row.

    I have tried indexes on the fields included in the WHERE clause, but they don't seem to help. I have also tried to clean the table each week by removing old (> 1 week) records. So I'm asking you: how would you solve this?

    The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all
        FROM blog_views
        WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS `blogs` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(255) default NULL,
          `perma_name` varchar(255) default NULL,
          `author_id` int(11) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          `blog_picture_id` int(11) default NULL,
          `blog_picture2_id` int(11) default NULL,
          `page_id` int(11) default NULL,
          `blog_picture3_id` int(11) default NULL,
          `active` tinyint(1) default '1',
          PRIMARY KEY (`id`),
          KEY `index_blogs_on_author_id` (`author_id`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

        CREATE TABLE IF NOT EXISTS `blog_views` (
          `id` int(11) NOT NULL auto_increment,
          `blog_id` int(11) default NULL,
          `ip` varchar(255) default NULL,
          `created_at` datetime default NULL,
          `updated_at` datetime default NULL,
          PRIMARY KEY (`id`),
          KEY `index_blog_views_on_blog_id` (`blog_id`),
          KEY `created_at` (`created_at`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
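    Because the WHERE clause filters on both blog_id and created_at, the two separate single-column indexes force MySQL to pick one and scan within it. A composite index covering both (a sketch; the index name is arbitrary) lets the count be resolved from the index alone:

        ALTER TABLE `blog_views`
            ADD INDEX `index_blog_views_on_blog_and_date` (`blog_id`, `created_at`);

    blog_id goes first because it is the equality predicate, and created_at second serves the range. If that is still too slow at this traffic level, the usual next step is a counter cache or a pre-aggregated daily summary table instead of counting raw hit rows on every page view.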


  • Is it possible to load an entire SQL Server CE database into RAM?

    - by DanM
    I'm using LINQ to SQL to query a small SQL Server CE database, and I've noticed that any operation involving sub-properties is disappointingly slow. For example, if I have a Customer table that is referenced by an Order table via a foreign key, LINQ to SQL will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like customer.Orders.Where(o => o.ProductName == "Stopwatch"), but for some reason SQL Server CE hangs up pretty badly when I try to do stuff like this: one of my queries, which isn't really that complicated, takes 3-4 seconds to complete.

    I can get the speed up to acceptable, even fast, if I just grab the two tables individually, convert them to List<Customer> and List<Order>, and join them manually with my own query, but this throws out a lot of the appeal of LINQ to SQL. So I'm wondering if I can somehow get the whole database into RAM and just query that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance?

    Note: my database in its initial state is about 250K, and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

