Search Results

Search found 52589 results on 2104 pages for 'read table'.


  • Simple Select Statement on MySQL Database Hanging

    - by AlishahNovin
    I have a very simple SQL select statement on a very large table that is non-normalized. (Not my design at all; I'm just trying to optimize while simultaneously trying to convince the owners of a redesign.) Basically, the statement is like this:

        SELECT FirstName, LastName, FullName, State
        FROM Activity
        WHERE (FirstName=@name OR LastName=@name OR FullName=@name)
          AND State=@state;

    Now, FirstName, LastName, FullName and State are all indexed as BTrees, but without prefix - the whole column is indexed. The State column is a 2-letter state code. What I'm finding is this:

    - When @name = 'John Smith' and @state = '%', the search is really fast and yields results immediately.
    - When @name = 'John Smith' and @state = 'FL', the search takes 5 minutes (and usually this means the web service times out...).
    - When I remove the FirstName and LastName comparisons and only use FullName and State, both cases above work very quickly.
    - When I keep the FirstName, LastName, FullName, and State searches but use LIKE for each, it works fast for @name='John Smith%' and @state='%', but slow for @name='John Smith%' and @state='FL'.
    - When I search against 'John Sm%' and @state='FL', the search finds results immediately.
    - When I search against 'John Smi%' and @state='FL', the search takes 5 minutes.

    Now, just to reiterate - the table is not normalized. John Smith appears many, many times, as do many other users, because there is no reference to some form of users/people table. I'm not sure how many times a single user may appear, but the table itself has 90 million records. Again, not my design...

    What I'm wondering is - though there are many, many problems with this design - what is causing this specific problem? My guess is that the index trees (FirstName, LastName, FullName) are just so large that it takes a very long time traversing them. Anyway, I appreciate anyone's help with this. Like I said, I'm working on convincing them of a redesign, but in the meantime, if someone could help me figure out what the exact problem is, that'd be fantastic.
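
    A hedged aside (not from the original post): since each OR branch pairs one name column with the low-selectivity State column, composite indexes covering both columns would let MySQL answer each branch with a single index range scan instead of intersecting a huge name index with a near-useless State index. A sketch, with made-up index names:

        CREATE INDEX idx_fullname_state  ON Activity (FullName,  State);
        CREATE INDEX idx_firstname_state ON Activity (FirstName, State);
        CREATE INDEX idx_lastname_state  ON Activity (LastName,  State);

    Running EXPLAIN before and after would confirm whether the optimizer picks them up for the OR query; older MySQL versions may need the query rewritten as a UNION of three single-column lookups to use them.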

  • Setting the comment of a column to that of another column in Postgresql

    - by dland
    Suppose I create a table in Postgresql with a comment on a column:

        create table t1 (
          c1 varchar(10)
        );
        comment on column t1.c1 is 'foo';

    Some time later, I decide to add another column:

        alter table t1 add column c2 varchar(20);

    I want to look up the comment contents of the first column and associate it with the new column:

        select comment_text from (what?)
        where table_name = 't1' and column_name = 'c1'

    The (what?) is going to be a system table, but after having looked around in pgAdmin and searched on the web I haven't learnt its name. Ideally I'd like to be able to do:

        comment on column t1.c1 is (select ...);

    but I have a feeling that's stretching things a bit far. Thanks for any ideas.

    Update: based on the suggestions I received here, I wound up writing a program to automate the task of transferring comments, as part of a larger process of changing the datatype of a Postgresql column. You can read about that on my blog.
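
    For reference, a sketch of the catalog lookup (an assumption against the standard catalogs, not verified on the poster's version): comments live in pg_description, the built-in col_description(table_oid, column_number) helper reads them, and the column number comes from pg_attribute:

        -- fetch the comment on t1.c1
        SELECT col_description('t1'::regclass,
               (SELECT attnum FROM pg_attribute
                 WHERE attrelid = 't1'::regclass AND attname = 'c1'));

        -- COMMENT ON cannot take a subquery, so copying it to c2
        -- needs dynamic SQL (or a client-side round trip)
        DO $$
        BEGIN
          EXECUTE format('COMMENT ON COLUMN t1.c2 IS %L',
                         col_description('t1'::regclass,
                                         (SELECT attnum FROM pg_attribute
                                           WHERE attrelid = 't1'::regclass
                                             AND attname = 'c1')));
        END $$;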

  • How should I define a composite foreign key for domain constraints in the presence of surrogate keys

    - by Samuel Danielson
    I am writing a new app with Rails, so I have an id column on every table. What is the best practice for enforcing domain constraints using foreign keys? I'll outline my thoughts and frustration. Here's what I would imagine as "The Rails Way"; it's what I started with.

        Companies:
          id:           integer, serial
          company_code: char, unique, not null

        Invoices:
          id:         integer, serial
          company_id: integer, not null

        Products:
          id:         integer, serial
          sku:        char, unique, not null
          company_id: integer, not null

        LineItems:
          id:         integer, serial
          invoice_id: integer, not null, references Invoices (id)
          product_id: integer, not null, references Products (id)

    The problem with this is that a product from one company might appear on an invoice for a different company. I added a (company_id: integer, not null) to LineItems, sort of like I'd do if only using natural keys and serials, then added composite foreign keys:

        LineItems (product_id, company_id) references Products (id, company_id)
        LineItems (invoice_id, company_id) references Invoices (id, company_id)

    This properly constrains LineItems to a single company, but it seems over-engineered and wrong. company_id in LineItems is extraneous because the surrogate foreign keys are already unique in the foreign table. Postgres requires that I add a unique index for the referenced attributes, so I am creating a unique index on (id, company_id) in Products and Invoices, even though id is simply unique.

    The following table, with natural keys and a serial invoice number, would not have these issues:

        LineItems:
          company_code: char, not null
          sku:          char, not null
          invoice_id:   integer, not null

    I can ignore the surrogate keys in the LineItems table, but this also seems wrong. Why make the database join on char when it has an integer already there to use? Also, doing it exactly like the above would require me to add company_code, a natural foreign key, to Products and Invoices. The compromise...

        LineItems:
          company_id: integer, not null
          sku:        integer, not null
          invoice_id: integer, not null

    ...does not require natural foreign keys in other tables, but it is still joining on char when there is an integer available. Is there a clean way to enforce domain constraints with foreign keys like God intended, but in the presence of surrogates, without turning the schema and indexes into a complicated mess?
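
    For concreteness, a minimal sketch of the composite-key variant described above (assumed names, Postgres syntax; invoices is assumed to carry the same (id, company_id) unique constraint as products):

        CREATE TABLE products (
          id         serial   PRIMARY KEY,
          company_id integer  NOT NULL REFERENCES companies (id),
          sku        char(12) NOT NULL,
          UNIQUE (id, company_id)  -- redundant given the PK alone,
                                   -- but required as a composite FK target
        );

        CREATE TABLE line_items (
          id         serial  PRIMARY KEY,
          company_id integer NOT NULL,
          invoice_id integer NOT NULL,
          product_id integer NOT NULL,
          FOREIGN KEY (product_id, company_id) REFERENCES products (id, company_id),
          FOREIGN KEY (invoice_id, company_id) REFERENCES invoices (id, company_id)
        );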

  • Violating 1st normal form, is it okay for my purpose?

    - by Nick
    So I'm making a running log, and I have the workouts stored as entries in a table. For each workout, the user can add intervals (which consist of a time and a distance), so I have an array like this:

        [workout] =>
            [description] =>
            [comments]    =>
            ...
            [intervals] =>
                [0] =>
                    [distance] => 200m
                    [time]     => 32
                [1] =>
                    [distance] => 400m
                    [time]     => 65
                ...

    I'm really tempted to throw the "intervals" array into serialize() or json_encode() and put it in an "intervals" field in my table; however, this violates the principles of good database design (which, incidentally, I know hardly anything about). Is there any disadvantage to doing this? I never plan on querying my table based on the contents of "intervals". Creating a separate table just for intervals seems like a lot of unnecessary complexity, so if anyone with more experience has had a situation like this, what route did you take and how did it work out?
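
    For scale, the normalized alternative is a single small table (a sketch with assumed names; the serialized-column route mainly costs the ability to query, index, or update individual intervals in SQL):

        CREATE TABLE intervals (
          workout_id INT NOT NULL,          -- FK to the workouts table
          seq        INT NOT NULL,          -- 0, 1, 2, ... within the workout
          distance   VARCHAR(10) NOT NULL,  -- e.g. '200m'
          time_sec   INT NOT NULL,
          PRIMARY KEY (workout_id, seq)
        );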

  • OleDbException was unhandled in VB.Net

    - by ritch
    Syntax error (missing operator) in query expression '((ProductID = ?) AND ((? = 1 AND Product Name IS NULL) OR (Product Name = ?)) AND ((? = 1 AND Price IS NULL) OR (Price = ?)) AND ((? = 1 AND Quantity IS NULL) OR (Quantity = ?)))'.

    I need some help sorting this error out in Visual Basic .NET 2008. I am trying to update records in an MS Access database. I have it able to update one table, but the other table is just not having it.

        Private Sub Admin_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
            'Reads users into the program from the text file (located at Module.VB)
            ReadUsers()
            'Connect to Access 2007 database file
            con.ConnectionString = ("Provider=Microsoft.ACE.OLEDB.12.0;" & _
                "Data Source=E:\Computing\Projects\Login\Login\bds.accdb;")
            con.Open()
            'SQL connect 1
            sql = "Select * From Clients"
            da = New OleDb.OleDbDataAdapter(sql, con)
            da.Fill(ds, "Clients")
            MaxRows = ds.Tables("Clients").Rows.Count
            intCounter = -1
            'SQL connect 2
            sql2 = "Select * From Products"
            da2 = New OleDb.OleDbDataAdapter(sql2, con)
            da2.Fill(ds, "Products")
            MaxRows2 = ds.Tables("Products").Rows.Count
            intCounter2 = -1
            'Show clients from database in a ComboBox
            ComboBoxClients.DisplayMember = "ClientName"
            ComboBoxClients.ValueMember = "ClientID"
            ComboBoxClients.DataSource = ds.Tables("Clients")
        End Sub

    The button where the error appears, on da2.Update(ds, "Products"):

        Private Sub Button4_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button4.Click
            Dim cb2 As New OleDb.OleDbCommandBuilder(da2)
            ds.Tables("Products").Rows(intCounter2).Item("Price") = ProductPriceBox.Text
            da2.Update(ds, "Products")
            'Alerts the user that the database has been updated
            MsgBox("Database Updated")
        End Sub

    However, the same approach works when updating another table:

        Private Sub UpdateButton_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles UpdateButton.Click
            'Allows users to update records in the database
            Dim cb As New OleDb.OleDbCommandBuilder(da)
            'Changes the database contents with the content in the text fields
            ds.Tables("Clients").Rows(intCounter).Item("ClientName") = ClientNameBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientID") = ClientIDBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientAddress") = ClientAddressBox.Text
            ds.Tables("Clients").Rows(intCounter).Item("ClientTelephoneNumber") = ClientNumberBox.Text
            'Updates the table within the database
            da.Update(ds, "Clients")
            'Alerts the user that the database has been updated
            MsgBox("Database Updated")
        End Sub
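
    A hedged observation (an assumption, since the table schema isn't shown): the generated SQL above contains the unbracketed identifier Product Name, and column names with spaces are exactly what makes the Access OLE DB provider report "missing operator". If that's the case, telling the command builder to bracket identifiers may be enough:

        Dim cb2 As New OleDb.OleDbCommandBuilder(da2)
        cb2.QuotePrefix = "["   ' so the generated SQL reads [Product Name]
        cb2.QuoteSuffix = "]"
        ds.Tables("Products").Rows(intCounter2).Item("Price") = ProductPriceBox.Text
        da2.Update(ds, "Products")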

  • Reading CSV files in numpy where delimiter is ","

    - by monch1962
    Hello all, I've got a CSV file with a format that looks like this:

        "FieldName1", "FieldName2", "FieldName3", "FieldName4"
        "04/13/2010 14:45:07.008", "7.59484916392", "10", "6.552373"
        "04/13/2010 14:45:22.010", "6.55478493312", "9", "3.5378543"
        ...

    Note that there are double-quote characters at the start and end of each line in the CSV file, and the "," string is used to delimit fields within each line. When I try to read this into numpy via:

        import numpy as np
        data = np.genfromtxt(csvfile, dtype=None, delimiter=',', names=True)

    all the data gets read in as string values, surrounded by double-quote characters. Not unreasonable, but not much use to me, as I then have to go back and convert every column to its correct type.

    When I use delimiter='","' instead, everything works as I'd like, except for the 1st and last fields. As the start-of-line and end-of-line characters are a single double-quote character, this isn't seen as a valid delimiter for the 1st and last fields, so they get read in as e.g. "04/13/2010 14:45:07.008 and 6.552373" - note the leading and trailing double-quote characters, respectively. Because of these redundant characters, numpy assumes the 1st and last fields are both String types; I don't want that to be the case.

    Is there a way of instructing numpy to read in files formatted in this fashion as I'd like, without having to go back and "fix" the structure of the numpy array after the initial read?
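
    One workaround (a sketch, not from the original post): let Python's csv module do the quote handling - its skipinitialspace option copes with the space after each delimiter - and build the arrays from the parsed rows:

        import csv
        import numpy as np

        with open('data.csv') as f:                  # assumed filename
            rows = list(csv.reader(f, skipinitialspace=True))

        names = rows[0]                              # header row
        cols = list(zip(*rows[1:]))                  # transpose to columns
        timestamps = np.array(cols[0])               # keep timestamps as strings
        values = np.array(cols[1:], dtype=float)     # numeric columns as floats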

  • Behavior of a pipe after a fork()

    - by Steve Melvin
    When reading about pipes in Advanced Programming in the UNIX Environment, I noticed that after a fork, the parent can close() the read end of a pipe and it doesn't close the read end for the child. When a process forks, do its file descriptors get retained? What I mean by this is: before the fork, the pipe's read file descriptor had a retain count of 1, and after the fork, 2. When the parent closed its read side, the fd went to 1 and is kept open for the child. Is this essentially what is happening? Does this behavior also occur for regular file descriptors?
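
    A minimal sketch of the behaviour being asked about (standard POSIX; error checking omitted). fork() duplicates the descriptor table, so each open file description effectively carries a reference count, and close() in one process only drops that process's reference - the same rule applies to descriptors for regular files:

        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            int fd[2];
            char buf[6];

            pipe(fd);                          /* fd[0] read end, fd[1] write end */
            if (fork() == 0) {                 /* child inherits copies of both   */
                close(fd[1]);
                read(fd[0], buf, sizeof buf);  /* read end is still open here     */
                printf("child read: %s\n", buf);
                return 0;
            }
            close(fd[0]);                      /* parent drops only ITS read end  */
            write(fd[1], "hello", 6);
            close(fd[1]);
            return 0;
        }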

  • Formula parsing / evaluation routine or library with generic DLookup functionality

    - by tbone
    I am writing a .NET application where I must support user-defined formulas that can perform basic mathematics, as well as access data from any arbitrary table in the database. I have the math part working, using JScript Eval(). What I haven't decided on is a nice way to do the generic table lookups. For example, I may have a formula something like:

        Column:  BonusAmount
        Formula: {CurrentSalary} * 1.5 * {[SystemSettings][Value][SettingName=CorpBonus AND Year={Year}]}

    So, in this example I would replace {xxx} and {Year} with the value of column xxx from the current table, and I would replace the second part with the value of:

        SELECT Value FROM SystemSettings WHERE SettingName='CorpBonus' AND Year=2008

    So, basically, I am looking for something very much like the MS Access DLookup function:

        DLookup(expression, domain, [criteria])
        DLookup("[UnitPrice]", "Order Details", "OrderID = 10248")

    But I also need an overall parsing routine that can tell whether to just look up in the current row or to look into another table. It would also be nice to support aggregate functions (i.e. DAvg, DMax, etc.), as well as have all the weird edge cases handled. So I wonder if anyone knows of any sort of existing library, or has a nice routine that can handle these formula-parsing and database lookup / aggregate function resolution requirements.
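
    Failing a ready-made library, the DLookup core itself is small to hand-roll (a C# sketch under assumptions: ADO.NET, and expression/domain/criteria coming from trusted formula definitions, since concatenated SQL like this is injectable otherwise):

        // using System.Data;
        static object DLookup(IDbConnection con, string expr,
                              string domain, string criteria = null)
        {
            using (var cmd = con.CreateCommand())
            {
                cmd.CommandText = "SELECT " + expr + " FROM [" + domain + "]" +
                                  (criteria == null ? "" : " WHERE " + criteria);
                return cmd.ExecuteScalar();   // first column of the first row
            }
        }

    Aggregates then come for free: DLookup(con, "AVG(UnitPrice)", "Order Details", "OrderID = 10248") is effectively DAvg.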

  • How to keep track of a private messaging system using MongoDB?

    - by luckytaxi
    Take Facebook's private messaging system, where you have to keep track of sender and receiver along with the message content. If I were using MySQL I would have multiple tables, but with MongoDB I'll try to avoid all that. I'm trying to come up with a "good" schema that can scale and is easy to maintain. If I were using MySQL, I would have a separate table to reference the user and the message. See below...

        profiles table
            user_id
            first_name
            last_name

        message table
            message_id
            message_body
            time_stamp

        user_message_ref table
            user_id (FK)
            message_id (FK)
            is_sender (boolean)

    With the schema listed above, I can query for any messages that "Bob" may have, regardless of whether he's the recipient or sender. Now, how do I turn that into a schema that works with MongoDB? I'm thinking I'll have a separate collection to hold the messages. Problem is, how can I differentiate between the sender and the recipient? If Bob logs in, what do I query against? Depending on whether Bob initiated the email, I don't want to have to query against "sender" and "receiver" just to see if the message belongs to the user.
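
    One common shape (a sketch, not from the post): denormalize sender and recipient onto each message document and let a single $or query cover Bob's whole mailbox; an index on each field keeps both branches cheap:

        db.messages.insert({
            sender_id:    bobId,        // assumed ObjectId references
            recipient_id: aliceId,      // into the profiles collection
            body:         "hello",
            ts:           new Date()
        })

        // everything Bob sent or received, newest first
        db.messages.find({ $or: [ { sender_id: bobId },
                                  { recipient_id: bobId } ] }).sort({ ts: -1 })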

  • jQuery DataTables throws an error when row headers are created dynamically

    - by JM4
    I am using the DataTables jQuery plugin for one of my projects. For one table in particular, the number of columns can vary based on how many children a consumer has (yes, I realize normalization and proper technique would insert them on another row, but it is a client requirement). DataTables must be set up as such:

        <table>
          <thead>
            <tr><th></th></tr>
          </thead>
          <tbody>
            <tr><td></td></tr>
          </tbody>
        </table>

    My script starts out as:

        <table cellpadding="0" cellspacing="0" border="0" class="display" id="sortable">
        <thead>
          <tr>
            <th>parent name</th>
            <th>parent phone</th>
            <?php
            try {
                $db->beginTransaction();
                $stmt = $db->prepare("SELECT max(num_deps) FROM (SELECT count(a.id) as num_deps FROM children a INNER JOIN parents b USING(id) WHERE a.id !=0 GROUP BY a.id) x");
                $stmt->execute();
                $rows = $stmt->fetchAll();
                for($i=1; $i<=$rows[0][0]; $i++) {
                    echo "
                    <th>Child Name ".$i."</th>
                    <th>Date of Birth ".$i."</th>
                    ";
                }
                $db->commit();
            } catch (PDOException $e) {
                echo "<p align='center'>There was a system error. Please contact administration.<br>".$e->getMessage()."</p><br />";
            }
            ?>
          </tr>
        </thead>

    In this manner, the final column headers can be 1 or 50 spots long. However, with this dynamic code in place, DataTables throws the following error:

        DataTables warning (table id = 'datatable'): Cannot reinitialise DataTable.
        To retrieve the DataTables object for this table, please pass either no
        arguments to the dataTable() function, or set bRetrive to true.
        Alternativly, to destroy old table and create a new one...ETC.

    Yes, I have set "bRetrieve": true in the JavaScript above, and that does not do the trick. If I remove the code above, the file "works" fine, but it leaves off the necessary columns for my table. Any ideas?

    Displaying JS:

        <script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.6/jquery-ui.min.js"></script>
        <script type="text/javascript" src="../media/js/jquery.dataTables.min.js"></script>
        <script type="text/javascript" src="../media/js/TableTools/TableTools.js"></script>
        <script type="text/javascript" src="../media/ZeroClipboard/ZeroClipboard.js"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                TableToolsInit.sSwfPath = "../media/swf/ZeroClipboard.swf";
                oTable = $('#sortable').dataTable({
                    "bRetrieve": true,
                    "bProcessing": true,
                    "sScrollX": "100%",
                    "sScrollXInner": "110%",
                    "bScrollCollapse": true,
                    "bJQueryUI": true,
                    "sPaginationType": "full_numbers",
                    "sDom": 'T<"clear"><"fg-toolbar ui-widget-header ui-corner-tl ui-corner-tr ui-helper-clearfix"lfr>t<"fg-toolbar ui-widget-header ui-corner-bl ui-corner-br ui-helper-clearfix"ip>'
                });
            });
        </script>
        </head>

    TOP piece of HTML:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
        <title>Home</title>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <link rel="stylesheet" type="text/css" href="style.css" />
        <link rel="stylesheet" type="text/css" href="default.css" />
        <script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js"></script>
        <style type="text/css" title="currentStyle">
            @import "TableTools.css";
            @import "demo_table_jui.css";
            @import "jquery-ui-1.8.4.custom.css";
        </style>
        <script type="text/javascript" src="//ajax.googleapis.com/ajax/libs/jqueryui/1.8.6/jquery-ui.min.js"></script>
        <script type="text/javascript" src="js/jquery.dataTables.min.js"></script>
        <script type="text/javascript" src="js/TableTools/TableTools.js"></script>
        <script type="text/javascript" src="ZeroClipboard/ZeroClipboard.js"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                TableToolsInit.sSwfPath = "ZeroClipboard.swf";
                oTable = $('#sortable').dataTable({
                    "bRetrieve": true,
                    "bProcessing": true,
                    "sScrollX": "100%",
                    "sScrollXInner": "110%",
                    "bScrollCollapse": true,
                    "bJQueryUI": true,
                    "sPaginationType": "full_numbers",
                    "sDom": 'T<"clear"><"fg-toolbar ui-widget-header ui-corner-tl ui-corner-tr ui-helper-clearfix"lfr>t<"fg-toolbar ui-widget-header ui-corner-bl ui-corner-br ui-helper-clearfix"ip>'
                });
            });
        </script>
        </head>
        <body bgcolor="#e0e0e0">
        <div class="main">
        <div class="body">
        <div class="body_resize">
        <div class="liquid-round">
        <div class="top"><span><h2>Details</h2></span></div>
        <div class="center-content">
        <div style="overflow-x:hidden; min-height:400px; max-height:600px; overflow-y:auto;">
        <div class="demo_jui"><br />
        <table cellpadding="0" cellspacing="0" border="0" class="display" width="100%" id="sortable">
        <thead>
          <tr>
            <th>First Name</th>
            <th>MI</th>
            <th>Last Name</th>
            <th>Street Address</th>
            <th>City</th>
            <th>State</th>
            <th>Zip</th>
            <th>DOB</th>
            <th>Gender</th>
            <th>Spouse Name</th>
            <th>Spouse Date of Birth</th>
            <!-- this part is generated with the php; when removed, datatables works just fine with the rest of the page -->
            <th>Dependent Child Name 1</th>
            <th>Dependent Date of Birth 1</th>
            <th>Dependent Child Name 2</th>
            <th>Dependent Date of Birth 2</th>
            <th>Dependent Child Name 3</th>
            <th>Dependent Date of Birth 3</th>
            <th>Dependent Child Name 4</th>
            <th>Dependent Date of Birth 4</th>
            <th>Dependent Child Name 5</th>
            <th>Dependent Date of Birth 5</th>
            <th>Dependent Child Name 6</th>
            <th>Dependent Date of Birth 6</th>
            <th>Dependent Child Name 7</th>
            <th>Dependent Date of Birth 7</th>
          </tr>
        </thead>
        <tbody>
          <tr>
        ...

    UPDATE REGARDING COMMENTS/ANSWERS: I have received a number of responses indicating that the number of headers may not match the field count in the body. As I mention below, eliminating the PHP script above altogether would eliminate 5+ fields in the header and, without question, throw the count match off balance. This does NOT, however, cause an error, and in fact "resolves" the issue in that DataTables functions properly (even though there is NO header record for 5+ fields in the body).
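
    A hedged note (an assumption read off the markup above, not a confirmed diagnosis): the page ends up with two $(document).ready blocks that each call dataTable() on #sortable, which is exactly what the "Cannot reinitialise" warning complains about (and the warning's table id 'datatable', versus the markup's 'sortable', hints at yet another initialisation elsewhere). If the duplicate can't simply be removed, the legacy API offers a destroy-then-recreate pattern:

        $(document).ready(function () {
            var tbl = document.getElementById('sortable');
            if ($.fn.dataTable.fnIsDataTable(tbl)) {
                $('#sortable').dataTable().fnDestroy();   // drop the old instance
            }
            oTable = $('#sortable').dataTable({ "bRetrieve": true, "bJQueryUI": true });
        });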

  • RegEx Help in Ruby

    - by Akash
    My sample file is like below:

        H343423 Something1 Something2
        C343423 0
        A23423432 asdfasdf sdfs
        #2342323

    I have the following regex:

        if (line =~ /^[HC]\d+\s/) != nil
          puts line
        end

    Basically, I want to read everything that starts with H or C and is followed by numbers, and I want to stop reading when a space is encountered (I want to read one word). The output I want is:

        H343423
        C343423

    The output my regex is getting is:

        H343423 Something1 Something2
        C343423 0

    So it is fetching the whole line, but I just want it to stop after the first word is read. Any help?
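
    The pattern already anchors on the right token; the snag is that puts line prints the whole line rather than the part that matched. A sketch of one fix, capturing just the leading word:

        if (m = line.match(/^([HC]\d+)\s/))
          puts m[1]   # => "H343423", "C343423"
        end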

  • Load a file in a group objective-c Xcode

    - by okami
    I'd like to load a file from a specific group in Xcode/Objective-C. For example, I have TWO files named "SomeFile.txt" that are in two different folders (folders, not groups yet) in the OS:

        SomeFolderOne
        |- SomeFile.txt
        SomeFolderTwo
        |- SomeFile.txt

    In Xcode I make two groups, and I put a REFERENCE to these two files:

        SomeGroupOne
        |- SomeFile.txt  // a reference to the SomeFile.txt from SomeFolderOne
        SomeGroupTwo
        |- SomeFile.txt  // a reference to the SomeFile.txt from SomeFolderTwo

    Now I want to read the txt content with:

        NSString *contents = [NSString stringWithContentsOfFile:@"SomeFile.txt"
                                                       encoding:NSUTF8StringEncoding
                                                          error:nil];

    OK, it reads 'SomeFile.txt', but sometimes the file read is from SomeGroupOne and sometimes the file is read from SomeGroupTwo. How do I specify the group I want the file to be read from?
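
    A hedged pointer (not from the original post): groups are an Xcode-only organisational device - at build time both files are flattened into the same bundle directory, which is why the lookup is ambiguous. Adding the folders as folder references (the blue folders) preserves the directory structure inside the bundle, after which the subdirectory can be named explicitly:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"SomeFile"
                                                         ofType:@"txt"
                                                    inDirectory:@"SomeFolderOne"];
        NSString *contents = [NSString stringWithContentsOfFile:path
                                                       encoding:NSUTF8StringEncoding
                                                          error:nil];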

  • Skipping one item in the column

    - by zurna
    I created a simple news website. I store both videos and images in the IMAGES table. Videos added have 'videos' and images added have 'images' stored in a column called ImagesType. The images and videos attached to a news item are stored in the ImagesID column of the NEWS table. My problem occurs when I need to display the first image of a news item, i.e.

        IMAGES table:
        ImagesID  ImagesLgURL                      ImagesType
        1         /FLPM/media/videos/0H7T9C0F.flv  videos
        2         /FLPM/media/images/8R5D7M8O.jpg  images
        3         /FLPM/media/images/0E7Q9Z0C.jpg  images

        NEWS table:
        NewsID  ImagesID  NewsTitle
        1       1;2;      Street Chic: Paris   ERROR
        2       3;        Paris Runway         NO ERROR

    The following code gives me an error with the 1st news item, because the first ImagesID stored in the list is not an image but a video. I need to figure out a way to skip the video item and display the next image. I hope I made sense.

        SQL = "SELECT NEWSID, CATEGORIESID, IMAGESID, NEWSTITLE, NEWSSHORTDESC, NEWSACTIVE, NEWSDATEENTERED"
        SQL = SQL & " FROM NEWS N"
        SQL = SQL & " WHERE NEWSACTIVE = 1"
        SQL = SQL & " ORDER BY NEWSDATEENTERED DESC"
        Set objNews = objConn.Execute(SQL)

        Do While intLooper1 <= 3 And Not objNews.EOF
            IMAGES = Split(Left(objNews("IMAGESID"), Len(objNews("IMAGESID"))-1), ";")

            SQL = "SELECT ImagesID, ImagesName, ImagesLgURL, ImagesSmURL, ImagesType"
            SQL = SQL & " FROM IMAGES I"
            SQL = SQL & " WHERE ImagesID = " & IMAGES(0) & " AND ImagesType = 'images'"
            Set objLgImage = objConn.Execute(SQL)
        %>
            <div>
                <a href="?Section=news&SubSection=redirect&NEWSID=<%=objNews("NEWSID")%>">
                    <img src="<%=objLgImage("ImagesLgURL")%>" alt="<%=objLgImage("ImagesName")%>" />
                </a>
            </div>
        <%
            objLgImage.Close
            Set objLgImage = Nothing
            intLooper1 = intLooper1 + 1
            objNews.MoveNext
        Loop
        %>
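
    One way to skip the non-image entries (a sketch in the same classic-ASP style, assuming the split list keeps insertion order): walk the ID list until the lookup actually returns an image row, instead of always querying IMAGES(0):

        For i = 0 To UBound(IMAGES)
            SQL = "SELECT ImagesID, ImagesName, ImagesLgURL, ImagesSmURL, ImagesType" & _
                  " FROM IMAGES WHERE ImagesID = " & IMAGES(i) & _
                  " AND ImagesType = 'images'"
            Set objLgImage = objConn.Execute(SQL)
            If Not objLgImage.EOF Then Exit For   ' first real image found
        Next

        If Not objLgImage.EOF Then
            ' ... render the <img> block as before ...
        End If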

  • How do I hide an inherited __published property in the derived class in a VCL component?

    - by Gary Benade
    I have created a new VCL component based on an existing VCL component. What I want to do now is set the Password and Username properties from an INI file instead of the Object Inspector. I read on the Delphi forum linked above (Robert Dunn link) that you cannot unpublish a property, and that the only workaround is to redeclare the property as read-only. I tried this, but all it does is make the property read-only and grayed out in the Object Inspector. While this could work, I would prefer if the property wasn't visible at all.

        __property System::UnicodeString Password = {read=FPassword};

    Thanks in advance for any help or links to C++ VCL component-writing tutorials. I am using C++Builder 2010.

  • SQL Server - Multi-Column substring matching

    - by hamlin11
    One of my clients is hooked on multi-column substring matching. I understand that CONTAINS and FREETEXT search for words (and, at least in the case of CONTAINS, word prefixes). However, based upon my understanding of this MSDN book, neither of these nor their variants are capable of searching substrings. I have used LIKE rather extensively (SELECT * FROM A WHERE A.B LIKE '%substr%').

    Sample table A:

        ID | Col1     | Col2     | Col3     |
        -------------------------------------
        1  | oklahoma | colorado | Utah     |
        2  | arkansas | colorado | oklahoma |
        3  | florida  | michigan | florida  |
        -------------------------------------

    The following code will give us row 1 and row 2:

        select * from A
        where Col1 like '%klah%'
           or Col2 like '%klah%'
           or Col3 like '%klah%'

    This is rather ugly, probably slow, and I just don't like it very much - probably because the implementations I'm dealing with have 10+ columns that need to be searched. The following may be a slight improvement as far as code readability goes, but performance-wise we're still in the same ballpark:

        select * from A where (Col1 + ' ' + Col2 + ' ' + Col3) like '%klah%'

    I have thought about simply adding insert, update, and delete triggers that add the concatenated version of the above columns into a separate table that shadows this table.

    Sample Shadow_Table:

        ID | searchtext                 |
        ---------------------------------
        1  | oklahoma colorado Utah     |
        2  | arkansas colorado oklahoma |
        3  | florida michigan florida   |
        ---------------------------------

    This would allow us to perform the following query to search for '%klah%':

        select * from Shadow_Table where searchtext like '%klah%'

    I really don't like having to remember that this shadow table exists and that I'm supposed to use it when performing multi-column substring matching, but it probably yields pretty quick reads at the expense of writes and storage space. My gut feeling tells me there is an existing solution built into SQL Server 2008; however, I don't seem to be able to find anything other than research papers on the subject. Any help would be appreciated.
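
    A hedged alternative to the trigger-maintained shadow table (assuming SQL Server 2005+ and the same table/column names): a persisted computed column keeps the concatenation on the base table itself, so there is no second table to remember and nothing for triggers to maintain. The leading-wildcard LIKE still scans, but it scans one column of one table:

        ALTER TABLE A ADD searchtext AS
            (Col1 + ' ' + Col2 + ' ' + Col3) PERSISTED;

        SELECT * FROM A WHERE searchtext LIKE '%klah%';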

  • SQL Stored Queries - use result of query as boolean based on existence of records

    - by Christian Mann
    Just getting into SQL stored queries right now... anyway, here's my database schema (simplified for YOUR convenience):

        member
        ------
        id INT PK

        board
        ------
        id INT PK

        officer
        ------
        id INT PK

    If you're into OOP, Officer inherits Board inherits Member. In other words, if someone is listed in the officer table, s/he is listed in the board table and the member table. I want to find out the highest privilege level someone has. So far my SP looks like this:

        DELIMITER //
        CREATE PROCEDURE GetAuthLevel(IN targetID MEDIUMINT)
        BEGIN
          IF SELECT `id` FROM `member` WHERE `id` = targetID;
          THEN
            IF SELECT `id` FROM `board` WHERE `id` = targetID;
            THEN
              IF SELECT `id` FROM `officer` WHERE `id` = targetID;
              THEN RETURN 3; /*officer*/
              ELSE RETURN 2; /*board member*/
            ELSE RETURN 1; /*general member*/
          ELSE RETURN 0; /*not a member*/
        END //
        DELIMITER ;

    The exact text of the error is:

        #1064 - You have an error in your SQL syntax; check the manual that
        corresponds to your MySQL server version for the right syntax to use
        near 'SELECT id FROM member WHERE id = targetID; THEN IF SEL' at line 4

    I suspect the issue is in the arguments for the IF blocks. What I want to do is return true if the result set contains at least one row - i.e. the id was found in the table. Do any of you see anything to do here, or should I reconsider my database design into this?

        person
        ------
        id INT PK
        level SMALLINT
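
    A sketch of a working version (hedged: written from memory, not run against the poster's schema): RETURN is only legal in a stored FUNCTION, not a PROCEDURE, and the membership tests want IF EXISTS(...) rather than a bare SELECT. Checking the most privileged table first also flattens the nesting:

        DELIMITER //
        CREATE FUNCTION GetAuthLevel(targetID MEDIUMINT) RETURNS INT
        READS SQL DATA
        BEGIN
          IF EXISTS (SELECT 1 FROM `officer` WHERE `id` = targetID) THEN
            RETURN 3;  /* officer */
          ELSEIF EXISTS (SELECT 1 FROM `board` WHERE `id` = targetID) THEN
            RETURN 2;  /* board member */
          ELSEIF EXISTS (SELECT 1 FROM `member` WHERE `id` = targetID) THEN
            RETURN 1;  /* general member */
          END IF;
          RETURN 0;    /* not a member */
        END //
        DELIMITER ;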

  • h2 (embedded mode) database files problem

    - by aeter
    There is an h2 database file in my src directory (Java, Eclipse): h2test.db.

    The problem: starting h2.jar from the command line (and thus the h2 browser interface on port 8082), I have created 2 tables, 'test1' and 'test2', in h2test.db and put some data in them. When trying to access them from Java code (JDBC), it throws a "table not found" exception. A "show tables" from the Java code returns a result set with 0 rows.

    Also, when creating a new table ('newtest') from the Java code (CREATE TABLE ... etc.), I cannot see it when starting the h2.jar browser interface afterwards; just the other two tables ('test1' and 'test2') are shown (but the newly created table 'newtest' is accessible from the Java code).

    I'm inexperienced with embedded databases; I believe I'm doing something fundamentally wrong here. My assumption is that I'm accessing the same file - once from the Java app, and once from the h2 console/browser interface. I cannot seem to understand it; what am I doing wrong here?
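
    A hedged guess at the mechanism (consistent with the symptoms, but an assumption): the JDBC URL and the console are resolving two different database files. A relative URL such as jdbc:h2:h2test is resolved against the JVM's working directory, while the console may have opened the copy in src (or quietly created a fresh, empty one elsewhere), so each side sees only its own tables. Pinning both connections to one absolute path removes the ambiguity:

        // note: recent h2 versions store this file as h2test.h2.db
        Connection con = DriverManager.getConnection(
                "jdbc:h2:/path/to/project/src/h2test", "sa", "");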

  • How to extract only the 1st 2 bytes of an NSString in Objective-C for iPhone programming

    - by suse
    Hello,

    1) How do I read the data from a read stream in Objective-C? The code below tells me how many bytes were read from the stream, but how do I know what data was read?

        CFIndex cf = CFReadStreamRead(stream, buffer, length);

    2) How do I extract only the 1st 2 bytes of an NSString in Objective-C for iPhone programming? For example, with the string

        NSString *str = @"017MacApp";

    the 1st byte should hold 0 and the 2nd byte should hold 17. How do I extract 0 and 17 into a byte array? I know that the code below would give me back an int value from the byte array:

        ((b[0] & 0xFF) << 8) + (b[1] & 0xFF);

    But how do I put 0 into b[0] and 17 into b[1]? Please help me solve this.
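
    On question 1: CFReadStreamRead copies the bytes into the buffer argument itself, so after the call, buffer[0] through buffer[cf - 1] hold the data. For question 2, a sketch (assuming the intended split of "017MacApp" really is one character, then two):

        NSString *str = @"017MacApp";
        unsigned char b[2];
        b[0] = (unsigned char)[[str substringWithRange:NSMakeRange(0, 1)] intValue]; // 0
        b[1] = (unsigned char)[[str substringWithRange:NSMakeRange(1, 2)] intValue]; // 17
        int value = ((b[0] & 0xFF) << 8) + (b[1] & 0xFF);                            // 17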

  • getline at a certain line for file IO

    - by BSchlinker
    Is there any way to use getline to read a specific line within a file - for instance, to immediately read line #20? It seems inefficient to do any kind of loop to read and discard the earlier lines. I know about fseek, but there is no guarantee that the records will be the same length on each line. I imagine this is simply what is required in order to find lines: after all, to know when the end of a line has been reached, it needs to find the newline character, so it makes sense that it would need to read each line. Just wondering if there is any quicker method.
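
    For the record, a C++ sketch of the scan: with variable-length lines there is no way around touching every byte up to the target, though istream::ignore skips lines without copying them into a string:

        #include <fstream>
        #include <limits>
        #include <string>

        // Read 1-based line `n` of `path`; returns false if the file is shorter.
        bool read_line(const std::string& path, int n, std::string& out) {
            std::ifstream in(path);
            for (int i = 1; i < n; ++i)   // skip n-1 lines without storing them
                in.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
            return static_cast<bool>(std::getline(in, out));
        }

    If the same file is read repeatedly, building a one-time index of line-start offsets and then jumping with seekg amortizes the scan away.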

  • Oracle T4CPreparedStatement memory leaks?

    - by Jay
    A little background on the application that I am going to talk about in the next few lines: XYZ is a data-masking workbench Eclipse RCP application. You give it a source table column and a target table column; it applies a transformation (encryption/shuffling/etc.) and copies the row data from the source table to the target table. When I mask n tables at a time, n threads are launched by this app.

    Here is the issue: I have run into a production issue on the first rollout of the above-said app. Unfortunately, I don't have any logs to get to the root. However, I tried to run this app in the test region and do a stress test. When I collected .hprof files and ran them through an analyzer (YourKit), I noticed that objects of oracle.jdbc.driver.T4CPreparedStatement were retaining heap. The analysis also tells me that one of my classes is holding a reference to this prepared statement object, and thereby n threads have n such objects. T4CPreparedStatement seemed to have character arrays, lastBoundChars and bindChars, each of size char[300000].

    So, I researched a bit (Google!), obtained ojdbc6.jar, and tried decompiling T4CPreparedStatement. I see that T4CPreparedStatement extends OraclePreparedStatement, which dynamically manages the array sizes of lastBoundChars and bindChars.

    So, my questions here are:

    - Have you ever run into an issue like this?
    - Do you know the significance of lastBoundChars / bindChars?
    - I am new to profiling, so do you think I am not doing it correctly? (I also ran the hprofs through MAT, and this was the main identified issue - so I don't really think I could be wrong?)

    I have found something similar on the web here: http://forums.oracle.com/forums/thread.jspa?messageID=2860681

    Appreciate your suggestions / advice.

  • NSOutlineview - strange behavior after reloading data

    - by matei
    I have an NSOutlineView which loads data from a data source. The table is displayed in a panel. The items shown in the table belong to an object (the relation is one-to-many between the object and the items). I have a list of objects in a combo box (in fact an NSPopupButton), and when I select another object in the combo box, I want its items to be shown in the table (the NSOutlineView).

    I managed to do all this; however, when I select another object from the combo box, not all of its items are displayed. I have put some logging messages in the data source, and it seems that there are some items that are being returned from the data source but are not shown in the table (and are not queried for children).

    Now the strange part: when I click the main window (as I said, all that I described here is in an NSPanel loaded on top of the window), all the data is displayed correctly. It seems as if clicking the main window triggers something in the NSOutlineView that makes it display the missing items, but I can't tell what it is.

  • asp.net Membership: Extending Role membership?

    - by mark smith
    Hi there, I have been taking a look at ASP.NET membership, and it seems to provide everything that I need, but I require some kind of custom role functionality. Currently I can add a user to a role - great. But I also need to be able to add permissions to roles, i.e.

        Role: Editor
        Permissions: Can View Editor Menu, Can Write to Editors Table,
                     Can Delete Entries in Editors Table

    Currently it doesn't support this. The idea behind this is to create an admin option in my program to create a role and then assign permissions to the role, to say "allow the user to view a certain part of the application" or "allow the user to open a menu item".

    Any ideas how I would implement something like this? I presume a custom role provider, but I was wondering if some kind of framework extension existed already, without rolling my own? Or does anybody know a good tutorial on how to tackle this issue? I am quite happy with what the ASP.NET SQL provider has created in terms of tables etc., but I think I need to extend this by adding another table called RolePermissions, and then, I presume :-), adding some kind of enumeration into the table for each valid permission?

    Thanks in advance
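
    A sketch of the usual join-table extension (assumed names, sitting alongside the aspnet_Roles table that the SQL provider generates):

        CREATE TABLE Permissions (
            PermissionId INT IDENTITY PRIMARY KEY,
            Name NVARCHAR(100) NOT NULL UNIQUE   -- e.g. 'CanViewEditorMenu'
        );

        CREATE TABLE RolePermissions (
            RoleId UNIQUEIDENTIFIER NOT NULL,    -- FK to aspnet_Roles.RoleId
            PermissionId INT NOT NULL REFERENCES Permissions (PermissionId),
            PRIMARY KEY (RoleId, PermissionId)
        );

    A custom authorization check then reduces to one EXISTS query over the user's roles; the built-in RoleProvider itself doesn't need replacing for this.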

  • Strange Recurrent Excessive I/O Wait

    - by Chris
    I know quite well that I/O wait has been discussed multiple times on this site, but all the other topics seem to cover constant I/O latency, while the I/O problem we need to solve on our server occurs at irregular (short) intervals, but is ever-present, with massive spikes of up to 20k ms await and service times of 2 seconds. The disk affected is /dev/sdb (Seagate Barracuda; for details see below). A typical iostat -x output would at times look like this, which is an extreme sample but by no means rare:

        iostat (Oct 6, 2013)
        tps    rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await     svctm    %util
        0.00   0.00      0.00      0.00      0.00      0.00      0.00     0.00
        0.00   0.00      0.00      0.00      0.00      0.00      0.00     0.00
        16.00  0.00      156.00    9.75      21.89     288.12    36.00    57.60
        5.50   0.00      44.00     8.00      48.79     2194.18   181.82   100.00
        2.00   0.00      16.00     8.00      46.49     3397.00   500.00   100.00
        4.50   0.00      40.00     8.89      43.73     5581.78   222.22   100.00
        14.50  0.00      148.00    10.21     13.76     5909.24   68.97    100.00
        1.50   0.00      12.00     8.00      8.57      7150.67   666.67   100.00
        0.50   0.00      4.00      8.00      6.31      10168.00  2000.00  100.00
        2.00   0.00      16.00     8.00      5.27      11001.00  500.00   100.00
        0.50   0.00      4.00      8.00      2.96      17080.00  2000.00  100.00
        34.00  0.00      1324.00   9.88      1.32      137.84    4.45     59.60
        0.00   0.00      0.00      0.00      0.00      0.00      0.00     0.00
        22.00  44.00     204.00    11.27     0.01      0.27      0.27     0.60

    Let me provide you with some more information regarding the hardware. It's a Dell 1950 III box with Debian as OS, where uname -a reports the following:

        Linux xx 2.6.32-5-amd64 #1 SMP Fri Feb 15 15:39:52 UTC 2013 x86_64 GNU/Linux

    The machine is a dedicated server that hosts an online game, without any databases or I/O-heavy applications running. The core application consumes about 0.8 of the 8 GBytes RAM, and the average CPU load is relatively low. The game itself, however, reacts rather sensitively to I/O latency, and thus our players experience massive in-game lag, which we would like to address as soon as possible.

        iostat:
        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                  1.77   0.01   1.05     1.59     0.00    95.58

        Device:  tps    Blk_read/s  Blk_wrtn/s  Blk_read   Blk_wrtn
        sdb      13.16  25.42       135.12      504701011  2682640656
        sda      1.52   0.74        20.63       14644533   409684488

    Uptime is:

        19:26:26 up 229 days, 17:26, 4 users, load average: 0.36, 0.37, 0.32

    Hard disk controller:

        01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 1078 (rev 04)

    Hard disks:

        Array 1, RAID-1, 2x Seagate Cheetah 15K.5 73 GB SAS
        Array 2, RAID-1, 2x Seagate ST3500620SS Barracuda ES.2 500GB 16MB 7200RPM SAS

    Partition information from df:

        Filesystem  1K-blocks  Used      Available  Use%  Mounted on
        /dev/sdb1   480191156  30715200  425083668  7%    /home
        /dev/sda2   7692908    437436    6864692    6%    /
        /dev/sda5   15377820   1398916   13197748   10%   /usr
        /dev/sda6   39159724   19158340  18012140   52%   /var

    Some more data samples generated with iostat -dx sdb 1 (Oct 11, 2013):

        Device:  rrqm/s  wrqm/s  r/s   w/s     rsec/s  wsec/s   avgrq-sz  avgqu-sz  await    svctm    %util
        sdb      0.00    15.00   0.00  70.00   0.00    656.00   9.37      4.50      1.83     4.80     33.60
        sdb      0.00    0.00    0.00  2.00    0.00    16.00    8.00      12.00     836.00   500.00   100.00
        sdb      0.00    0.00    0.00  3.00    0.00    32.00    10.67     9.96      1990.67  333.33   100.00
        sdb      0.00    0.00    0.00  4.00    0.00    40.00    10.00     6.96      3075.00  250.00   100.00
        sdb      0.00    0.00    0.00  0.00    0.00    0.00     0.00      4.00      0.00     0.00     100.00
        sdb      0.00    0.00    0.00  2.00    0.00    16.00    8.00      2.62      4648.00  500.00   100.00
        sdb      0.00    0.00    0.00  0.00    0.00    0.00     0.00      2.00      0.00     0.00     100.00
        sdb      0.00    0.00    0.00  1.00    0.00    16.00    16.00     1.69      7024.00  1000.00  100.00
        sdb      0.00    74.00   0.00  124.00  0.00    1584.00  12.77     1.09      67.94    6.94     86.00

    Characteristic charts generated with rrdtool can be found here:

        iostat plot 1, 24 min interval:  http://imageshack.us/photo/my-images/600/yqm3.png/
        iostat plot 2, 120 min interval: http://imageshack.us/photo/my-images/407/griw.png/

    As we have a rather large cache of 5.5 GBytes, we thought it might be a good idea to test if the I/O wait spikes would perhaps be caused by cache-miss events. Therefore, we did a sync and then this to flush the cache and buffers:

        echo 3 > /proc/sys/vm/drop_caches

    Directly afterwards, the I/O wait and service times virtually went through the roof, and everything on the machine felt like slow motion. During the next few hours the latency recovered and everything was as before - small to medium lags at short, unpredictable intervals.

    Now my question is: does anybody have any idea what might cause this annoying behaviour? Is it the first indication of the disk array or the RAID controller dying, or something that can be easily mended by rebooting? (At the moment we're very reluctant to do this, however, because we're afraid that the disks might not come back up again.) Any help is greatly appreciated. Thanks in advance, Chris.

    Edited to add: we do see one or two processes go to 'D' state in top, one of which seems to be kjournald rather frequently. If I'm not mistaken, however, this does not indicate the processes causing the latency, but rather those affected by it - correct me if I'm wrong. Does the information about uninterruptibly sleeping processes help us in any way to address the problem?

    @Andy Shinn requested smartctl data; here it is. smartctl -a -d megaraid,2 /dev/sdb yields:

        smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
        Device: SEAGATE ST3500620SS Version: MS05
        Serial number:
        Device type: disk
        Transport protocol: SAS
        Local Time is: Mon Oct 14 20:37:13 2013 CEST
        Device supports SMART and is Enabled
        Temperature Warning Disabled or Not Supported
        SMART Health Status: OK
        Current Drive Temperature: 20 C
        Drive Trip Temperature: 68 C
        Elements in grown defect list: 0
        Vendor (Seagate) cache information
          Blocks sent to initiator = 1236631092
          Blocks received from initiator = 1097862364
          Blocks read from cache and sent to initiator = 1383620256
          Number of read and write commands whose size <= segment size = 531295338
          Number of read and write commands whose size > segment size = 51986460
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 36556.93
          number of minutes until next internal SMART test = 32

        Error counter log:
                 Errors Corrected by           Total      Correction  Gigabytes    Total
                 ECC fast | delayed  rereads/  errors     algorithm   processed    uncorrected
                                     rewrites  corrected  invocations [10^9 bytes] errors
        read:    509271032  47       0         509271079  509271079   20981.423    0
        write:   0          0        0         0          0           5022.039     0
        verify:  1870931090 196      0         1870931286 1870931286  100558.708   0

        Non-medium error count: 0

        SMART Self-test log
        Num  Test Description  Status     segment  LifeTime  LBA_first_err [SK ASC ASQ]
                                          number   (hours)
        # 1  Background short  Completed  16       36538     - [- - -]
        # 2  Background short  Completed  16       36514     - [- - -]
        # 3  Background short  Completed  16       36490     - [- - -]
        # 4  Background short  Completed  16       36466     - [- - -]
        # 5  Background short  Completed  16       36442     - [- - -]
        # 6  Background long   Completed  16       36420     - [- - -]
        # 7  Background short  Completed  16       36394     - [- - -]
        # 8  Background short  Completed  16       36370     - [- - -]
        # 9  Background long   Completed  16       36364     - [- - -]
        #10  Background short  Completed  16       36361     - [- - -]
        #11  Background long   Completed  16       2         - [- - -]
        #12  Background short  Completed  16       0         - [- - -]

        Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

    smartctl -a -d megaraid,3 /dev/sdb yields:

        smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
        Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
        Device: SEAGATE ST3500620SS Version: MS05
        Serial number:
        Device type: disk
        Transport protocol: SAS
        Local Time is: Mon Oct 14 20:37:26 2013 CEST
        Device supports SMART and is Enabled
        Temperature Warning Disabled or Not Supported
        SMART Health Status: OK
        Current Drive Temperature: 19 C
        Drive Trip Temperature: 68 C
        Elements in grown defect list: 0
        Vendor (Seagate) cache information
          Blocks sent to initiator = 288745640
          Blocks received from initiator = 1097848399
          Blocks read from cache and sent to initiator = 1304149705
          Number of read and write commands whose size <= segment size = 527414694
          Number of read and write commands whose size > segment size = 51986460
        Vendor (Seagate/Hitachi) factory information
          number of hours powered up = 36596.83
          number of minutes until next internal SMART test = 28

        Error counter log:
                 Errors Corrected by           Total      Correction  Gigabytes    Total
                 ECC fast | delayed  rereads/  errors     algorithm   processed    uncorrected
                                     rewrites  corrected  invocations [10^9 bytes] errors
        read:    610862490  44       0         610862534  610862534   20470.133    0
        write:   0          0        0         0          0           5022.480     0
        verify:  2861227413 203      0         2861227616 2861227616  100872.443   0

        Non-medium error count: 1

        SMART Self-test log
        Num  Test Description  Status     segment  LifeTime  LBA_first_err [SK ASC ASQ]
                                          number   (hours)
        # 1  Background short  Completed  16       36580     - [- - -]
        # 2  Background short  Completed  16       36556     - [- - -]
        # 3  Background short  Completed  16       36532     - [- - -]
        # 4  Background short  Completed  16       36508     - [- - -]
        # 5  Background short  Completed  16       36484     - [- - -]
        # 6  Background long   Completed  16       36462     - [- - -]
        # 7  Background short  Completed  16       36436     - [- - -]
        # 8  Background short  Completed  16       36412     - [- - -]
        # 9  Background long   Completed  16       36404     - [- - -]
        #10  Background short  Completed  16       36401     - [- - -]
        #11  Background long   Completed  16       2         - [- - -]
        #12  Background short  Completed  16       0         - [- - -]

        Long (extended) Self Test duration: 6798 seconds [113.3 minutes]

  • How to create a large resumable download from a secured location in .NET

    - by Kelvin H
    I need to preface: I'm not a .NET coder at all, but to get partial functionality, I modified a TechNet chunkedfilefetch.aspx script that uses the chunked data-reading and -writing streamed method of doing file transfer, to get me halfway:

        iStream = New System.IO.FileStream(path, System.IO.FileMode.Open, _
            IO.FileAccess.Read, IO.FileShare.Read)
        dataToRead = iStream.Length

        Response.ContentType = "application/octet-stream"
        Response.AddHeader("Content-Length", file.Length.ToString())
        Response.AddHeader("Content-Disposition", "attachment; filename=" & filedownload)

        ' Read and send the file 16,000 bytes at a time. '
        While dataToRead > 0
            If Response.IsClientConnected Then
                length = iStream.Read(buffer, 0, 16000)
                Response.OutputStream.Write(buffer, 0, length)
                Response.Flush()
                ReDim buffer(16000) ' Clear the buffer '
                dataToRead = dataToRead - length
            Else
                ' Prevent infinite loop if user disconnects '
                dataToRead = -1
            End If
        End While

    This works great on files up to 2GB and is fully functioning now, but with one problem: it doesn't allow for resume.

    I took the original code, called it fetch.aspx, and passed an orderNUM through the URL: fetch.aspx&ordernum=xxxxxxx. It then reads the filename/location from the database according to the order number, and chunks it out from a secured location NOT under the webroot.

    I need a way to make this resumable. By the nature of the internet and large files, people always get disconnected and would like to resume where they left off. But any resumable articles I've read assume the file is within the webroot, e.g. http://www.devx.com/dotnet/Article/22533/1954 - a great article that works well, but I need to stream from a secured location. I'm not a .NET coder at all; at best I can do a bit of ColdFusion. If anyone could help me modify a handler to do this, I would really appreciate it.

    Requirements:

    - I have a working fetch.aspx script that functions well and uses the above code snippet as a base for the streamed downloading.
    - Download files are large (600MB) and are stored in a secured location outside of the webroot.
    - Users click on the fetch.aspx to start the download, and would therefore be clicking it again if it were to fail.
    - If the extension is .ASPX and the file being sent is an AVI, clicking on it would completely bypass an IHttpHandler mapped to the .AVI extension, so this confuses me.
    - From what I understand, the browser will read and match the ETag value and file-modified date to determine that they are talking about the same file; then a subsequent accept-range is exchanged between the browser and IIS. Since this dialog happens with IIS, we need a handler to intercept and respond accordingly, but clicking on the link would send it to an ASPX file, while the handler needs to be on an AVI file... also confusing me.
    - If there were a way to get the initial HTTP request headers containing the ETag and accept-range into the normal .ASPX file, I could read those values and, if the accept-range and ETag exist, start chunking at that byte value somehow. But I couldn't find a way to access the HTTP request headers, since they seem to get lost at the IIS level.
    - OrderNum, which is passed in the URL string, is unique and could be used as the ETag: Response.AddHeader("ETag", request("ordernum"))
    - Files need to be resumable and chunked out due to size.
    - File extensions are .AVI, so a handler could be written around it.
    - IIS 6.0 web server.

    Any help would really be appreciated. I've been reading and reading and downloading code, but none of the examples given meet my situation, with the original file being streamed from outside of the webroot. Please help me get a handle on these HTTP handlers :)
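
    A hedged sketch of the resume logic (an untested assumption; it only handles the open-ended "bytes=N-" form that browsers send when resuming): the Range request header is readable from a plain .aspx via Request.Headers, so an IHttpHandler isn't strictly required. The additions slot in just before the send loop above, replacing the fixed Content-Length header shown earlier:

        Dim start As Long = 0
        Dim rangeHdr As String = Request.Headers("Range")   ' e.g. "bytes=1000-"
        Response.AddHeader("Accept-Ranges", "bytes")
        Response.AddHeader("ETag", Request("ordernum"))     ' ordernum as ETag
        If Not String.IsNullOrEmpty(rangeHdr) Then
            start = Long.Parse(rangeHdr.Replace("bytes=", "").TrimEnd("-"c))
            Response.StatusCode = 206                       ' Partial Content
            Response.AddHeader("Content-Range", String.Format("bytes {0}-{1}/{2}", _
                start, iStream.Length - 1, iStream.Length))
        End If
        iStream.Seek(start, IO.SeekOrigin.Begin)
        Response.AddHeader("Content-Length", (iStream.Length - start).ToString())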

  • Edit Html.ActionLink output string

    - by Aaron Salazar
    I'm trying to output the following HTML using Html.ActionLink:

        <a href="/About" class="read-more">Read More<span class="arrow">?</span></a>

    I'm getting it done by calling ActionLink, which outputs an <a> tag, and then manipulating the resulting string:

        <%= Html.ActionLink("[[replace]]", "Index", "About", null,
                new { @class = "read-more" }).ToHtmlString()
                .Replace("[[replace]]", "Read More" + "<span class='arrow'>?</span>") %>

    It would be good if I could put HTML directly into the ActionLink, but there doesn't seem to be a way, based on my internet searches. Sure, it works, but it seems like a hack. Is there a better way to accomplish this?
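
    The usual clean alternative (a sketch): since ActionLink HTML-encodes its link text by design, skip the helper and write the anchor yourself, letting Url.Action generate just the href:

        <a href="<%= Url.Action("Index", "About") %>" class="read-more">
            Read More<span class="arrow">?</span></a>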
