Search Results

Search found 23901 results on 957 pages for 'mysql stored procedure'.

Page 313/957

  • Problem with Ruby script output being stored into a file

    - by nickf
    I have a Ruby script that outputs a heap of text. As an example:
    puts "line 1"
    puts "line 2"
    puts "line 3"
    # etc... (obviously, this isn't how my script works..)
    There's not a lot of data - perhaps about 8kb of character data in total. When I run the script on the command line, it works as expected:
    $ ./my-script.rb
    line 1
    line 2
    line 3
    But when I redirect it into a file, the output is truncated at exactly 4096 bytes:
    $ ./my-script.rb > output.txt
    What would cause it to stop at 4kb?
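
    When stdout is redirected to a file it is block-buffered (typically in 4 KB chunks), so if the script forks, execs another program, or is killed before a normal exit, whatever is still sitting in the buffer can be lost. A minimal sketch of forcing unbuffered output, in case buffering turns out to be the culprit:

        #!/usr/bin/env ruby
        # Disable Ruby's output buffering so every puts reaches the file immediately,
        # even if the process forks or is terminated before a normal exit.
        STDOUT.sync = true

        # Belt and braces: flush anything still buffered when the script exits.
        at_exit { STDOUT.flush }

        3.times { |i| puts "line #{i + 1}" }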

    Read the article

  • nginx-tornado-django request timeout

    - by Xie
    We are using nginx-tornado-django to provide web services; that is, there is no web page frontend. The nginx server acts as a load balancer. Each server has 8 cores, so we launched 8 tornado-django processes per server. Memcached is also deployed for better performance, and we use MySQL as the backend DB. The request volume is about 1 million per day per server. The code is tested and correct, and our profiling shows that normally every request is processed within 100ms. The problem is that about 10 percent of requests time out, and many of them never even reach Tornado. I don't have much experience tuning nginx/Tornado/MySQL, and right now I don't have a clue what is going wrong. Any advice is appreciated.
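
    One thing worth ruling out is a mismatch between nginx's proxy timeouts / upstream pool and the 8 Tornado processes; a hypothetical upstream block for comparison (addresses, ports and timeout values are made-up examples, not taken from the question):

        upstream tornado_backends {
            # one entry per tornado-django process; ports are illustrative
            server 127.0.0.1:8001;
            server 127.0.0.1:8002;
            # ...
        }

        server {
            listen 80;

            location / {
                proxy_pass http://tornado_backends;
                # fail over to the next process instead of waiting on a busy one
                proxy_connect_timeout 5s;
                proxy_read_timeout    30s;
                proxy_next_upstream   error timeout;
            }
        }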

    Read the article

  • PL/SQL Sum by hour

    - by Steve
    Hi, I have some data with start and stop times that I need to sum, and I am not sure how to write the query for it. Here is the data I have to work with:
      STARTTIME, STOPTIME, EVENTCAPACITY
      8/12/2009 1:15:00 PM, 8/12/2009 1:59:59 PM, 100
      8/12/2009 2:00:00 PM, 8/12/2009 2:29:59 PM, 100
      8/12/2009 2:30:00 PM, 8/12/2009 2:59:59 PM, 80
      8/12/2009 3:00:00 PM, 8/12/2009 3:59:59 PM, 85
    In this example I would need the sums for 1pm to 2pm, 2pm to 3pm, and 3pm to 4pm. Any suggestions are appreciated. Steve
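
    A minimal Oracle sketch of the hourly grouping, assuming the rows live in a table called EVENTS (the table name is made up). In the sample data every row falls entirely inside one hour, so grouping on the truncated start time is enough; rows that spanned an hour boundary would need to be split first.

        SELECT TRUNC(starttime, 'HH24')   AS hour_start,
               SUM(eventcapacity)         AS total_capacity
        FROM   events
        GROUP  BY TRUNC(starttime, 'HH24')
        ORDER  BY hour_start;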

    Read the article

  • SQL Table Setup Advice

    - by Ozzy
    Hi all. Basically I have an XML feed from an offsite server. The feed has one parameter, ?value=n, where n can only be between 1 and 30, and whatever value I pick there will always be 4000 rows returned from the XML file. My script will call this XML file once a day for each value, 30 calls in total, so that's 120,000 rows. I will be doing quite complicated queries on these rows, but the main thing is that I will always filter by value first, e.g. SELECT * FROM table WHERE value = 'N'. That will ALWAYS be used. Now, is it better to have one table where all 120k rows are stored, or 30 tables where 4k rows are stored? EDIT: the SQL database in question will be MySQL.
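
    For what it's worth, a single 120k-row table with an index on value is usually the simpler option at this size; a hypothetical MySQL definition (column names are placeholders):

        CREATE TABLE feed_rows (
            id      INT UNSIGNED NOT NULL AUTO_INCREMENT,
            value   TINYINT UNSIGNED NOT NULL,   -- the feed parameter, 1..30
            payload TEXT,                        -- whatever each XML row contains
            PRIMARY KEY (id),
            KEY idx_value (value)                -- makes WHERE value = N cheap
        ) ENGINE=InnoDB;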

    Read the article

  • Excel string manipulation to check data consistency

    - by chefsmart
    Background information: There are nearly 7000 individuals and there is data about their performances in one, two or three tests. Every individual has taken the 1st test (let's call it Test M). Some of those who have taken Test M have also taken Test I, and some of those who have taken Test I have also taken Test B. For the first two tests (M and I), students can score grades I, II, or III. Depending on the grades they are awarded points -- 3 for grade I, 2 for II, 1 for III. The last Test B is just a pass or fail result with no grades. Those passing this test get 1 point, with no points for failure. (Well actually, grades are awarded, but all grades are given a common 1 point.)

    An amateur has entered data to represent these students and their grades in an Excel file. Problem is, this person has done the worst thing possible - he has developed his own notation and entered all test information in a single cell --- and made my life hell. The file originally had two text columns, one for the individual's id, and the second for test info, if one could call it that. It's horrible, I know, and I am suffering. In the image, if you see "M-II-2 I-III-1" it means the person got grade II in Test M for 2 points and grade III in Test I for 1 point. Some have taken only one test, some two, and some three.

    When the file came to me for processing and analyzing the performance of students, I sent it back with instructions to insert 3 additional columns with only the grades for the three tests. The file now looks as follows. Columns C and D represent grades I, II, and III using 1, 2 and 3 respectively. Column C is for Test M, column D for Test I. Column E says BA (B Achieved!) if the individual has passed Test B.

    Now that you have the above information, let's get to the problem. I don't trust this and want to check whether the data in column B matches the data in columns C, D and E. That is, I want to examine the string in column B and find out whether the figures in columns C, D and E are correct. All help is really appreciated. P.S. - I had exported this to MySQL via ODBC and that is why you are seeing those NULLs. I tried doing this in MySQL too, and really will accept a MySQL or an Excel solution, I don't have a preference. Edit: See file with sample data
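
    Since the data has already been exported to MySQL, one hedged way to flag rows whose free-text column disagrees with the grade in column C (Test M) is a LIKE pattern built from the numeric grade; the table and column names below are placeholders for whatever the export produced:

        -- Rows whose free-text column does not contain the grade implied by column C.
        SELECT id, b, c
        FROM   results
        WHERE  c IS NOT NULL
          AND  b NOT LIKE CONCAT('%M-', ELT(c, 'I', 'II', 'III'), '-%');

    The same idea with 'I-' in place of 'M-' checks column D, and a similar comparison against column E can cover the pass/fail Test B.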

    Read the article

  • Synchronising local and remote DB

    - by nico
    Hi everyone, I have a general question about DB synchronisation. I'm developing a website locally (PHP + MySQL) and I would like to be able to synchronise at least the structure (and maybe the contents) of the two DBs when one of them is changed (normally I would change the local copy). Right now what I'm doing is use mysqldump to dump the modified tables and then import them into the remote DB, or do it by hand if the changes are minimal. However, I find this tedious and error-prone. For the PHP I'm currently using Quanta+, which has the handy feature of finding the files that have changed and uploading just those. Is there something similar for MySQL? Otherwise, how do you keep your DBs synchronised? Thanks, nico. PS: I'm sorry if this was already asked; I saw other questions that deal with similar topics but couldn't really find an answer.
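
    A lightweight way to at least compare structure, assuming mysqldump is available for both servers (host, user and database names below are placeholders):

        # Dump structure only (no data, no timestamp comments) from both ends...
        mysqldump --no-data --skip-comments -u user -p local_db > local_schema.sql
        mysqldump --no-data --skip-comments -h remote.example.com -u user -p remote_db > remote_schema.sql

        # ...then structural drift shows up as a plain diff.
        # (AUTO_INCREMENT counters in the CREATE TABLE options may still appear as noise.)
        diff -u local_schema.sql remote_schema.sql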

    Read the article

  • Delphi Unicode String Type Stored Directly at its Address (or "Unicode ShortString")

    - by Andreas Rejbrand
    I want a string type that is Unicode and that stores the string directly at the address of the variable, as is the case with the (Ansi-only) ShortString type. I mean, if I declare S: ShortString and let S := 'My String', then, at @S, I will find the length of the string (as one byte, so the string cannot contain more than 255 characters) followed by the ANSI-encoded string itself. What I would like is a Unicode variant of this. That is, I want a string type such that, at @S, I will find an unsigned 32-bit integer (or a single byte would be enough, actually) containing the length of the string in bytes (or in characters, which is half the number of bytes) followed by the Unicode representation of the string. I have tried WideString, UnicodeString, and RawByteString, but they all appear to store only an address at @S, with the actual string somewhere else (I guess this has to do with reference counting and such). Update: The most important reason for this is probably that it would be very problematic if sizeof(string) were variable. I suspect that there is no built-in type to use, and that I have to come up with my own way of storing text the way I want (which actually is fun). Am I right? Update: I will, among other things, need to use these strings in packed records. I also need to read/write these strings to files/the heap manually. I could live with fixed-size strings, such as <= 128 characters, and I could redesign the problem so it will work with null-terminated strings. But PChar will not work, for sizeof(PChar) = 1 - it's merely an address. The approach I eventually settled for was to use a static array of bytes. I will post my implementation as a solution later today.
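
    The static-array approach the asker settled on can be sketched roughly like this: a length-prefixed packed record of WideChars with the 128-character cap mentioned above (the type name, the cap, and the helper are illustrative, not the asker's actual implementation):

        type
          // A "Unicode ShortString": the length and the UTF-16 data live directly
          // at the record's address, so it can sit inside packed records and files.
          TFixedWideString = packed record
            Len:  Byte;                        // number of characters actually used
            Data: array[0..127] of WideChar;   // fixed-size UTF-16 storage
          end;

        procedure SetFixedWideString(var S: TFixedWideString; const Value: string);
        begin
          S.Len := Length(Value);
          if S.Len > Length(S.Data) then
            S.Len := Length(S.Data);           // silently truncate over-long input
          Move(PChar(Value)^, S.Data[0], S.Len * SizeOf(WideChar));
        end;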

    Read the article

  • Shopping Cart Database Structure

    - by Paul Atkins
    Hi, I have been studying the database structure for shopping carts and noticed that when storing order details, the product information is repeated and stored again in the order details table. I was wondering what the reasoning behind this would be? Here is a small example of what I mean:
      Product Table: product_id | name | desc | price
      1 | product 1 | This is product 1 | 27.00
      Order Table: order_id | customer_id | order_total
      1 | 3 | 34.99
      Order Details Table: order_details_id | product_id | product_name | price | qty
      1 | 1 | product 1 | 27.00 | 1
    So as you can see, the product name and price are stored again in the order details table. Why is this? The only reason I can think of is that the product details may change after the order has been placed, which may cause confusion. Is this correct? Thanks, Paul
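
    That is the usual reason: the order line keeps a snapshot of the product as it was sold, so later price or name changes (or deleted products) don't rewrite order history. A hedged sketch of such a detail table (names and types are placeholders):

        CREATE TABLE order_details (
            order_details_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
            order_id         INT UNSIGNED NOT NULL,
            product_id       INT UNSIGNED NOT NULL,     -- still links back for reporting
            product_name     VARCHAR(255) NOT NULL,     -- snapshot at time of sale
            price            DECIMAL(10,2) NOT NULL,    -- price actually charged
            qty              INT UNSIGNED NOT NULL,
            PRIMARY KEY (order_details_id)
        );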

    Read the article

  • Shell script to import mysql dump file.

    - by Chandu
    Hi all, I'm new to MySQL. My requirement is to import a SQL dump into MySQL using a shell script on Linux, and this script should be called by a Java program so that the restoration takes place automatically. Please advise me on this. Regards, Chandu.
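
    A minimal sketch, assuming the target database already exists and credentials are passed as arguments (the argument layout is made up):

        #!/bin/sh
        # restore.sh -- import a MySQL dump into an existing database
        # usage: ./restore.sh <db_name> <dump_file> <user> <password>
        mysql -u "$3" -p"$4" "$1" < "$2"

    From Java, the script could then be launched with something like new ProcessBuilder("/bin/sh", "restore.sh", dbName, dumpFile, user, pass).start(); the argument order here is just this sketch's convention.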

    Read the article

  • How to store or share live data between PHP Requests?

    - by Devyn
    Hi, I want to start a project for Facebook; the application will be a real-time multiplayer chess game. The problem I'm having is that I have no idea how to store the data when a player moves a piece and how to update the new position in player 2's browser. I'm going to use PHP and MySQL on the server side and jQuery for client rendering. The simplest idea is to store the data in XML or MySQL and re-send the result to player 2's browser, but I know that when thousands of players are playing it will not be an efficient way. Since I don't have time to study a new language for this project, I'm going to have to stick with PHP. I'm not going to use Flash either, because I want my client side lightweight and Flash-free. So is there any way that will solve my problems?
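
    One PHP-only approach is to write each move to a MySQL table and have the opponent's browser poll (with jQuery) for moves it has not seen yet; a rough sketch of the polling endpoint, with invented table and column names:

        <?php
        // poll.php?game_id=...&after_id=...  -- returns moves newer than after_id as JSON
        $pdo = new PDO('mysql:host=localhost;dbname=chess', 'user', 'pass');

        $stmt = $pdo->prepare(
            'SELECT id, piece, from_sq, to_sq
             FROM moves
             WHERE game_id = ? AND id > ?
             ORDER BY id');
        $stmt->execute(array($_GET['game_id'], $_GET['after_id']));

        header('Content-Type: application/json');
        echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));

    The client would call this every second or two with setInterval and remember the highest id it has seen. Whether simple polling scales to thousands of concurrent games is exactly the open question in the post.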

    Read the article

  • PL/SQL profiler missing data

    - by user289429
    We are using the PL/SQL profiler to collect metrics. We noticed that in one of the environments the plsql_profiler_runs table is populated with the total execution time, but the finer details that get collected in the plsql_profiler_data table are missing. Any idea why this would be happening? We do use dbms_profiler.flush_data() before stopping the profiler and have seen this work fine in another environment.
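
    For reference, the call pattern described above looks roughly like this (a sketch; the run comment is arbitrary), with flush_data invoked before stop_profiler:

        DECLARE
          v_ret PLS_INTEGER;
        BEGIN
          v_ret := DBMS_PROFILER.START_PROFILER(run_comment => 'nightly batch');

          -- ... run the instrumented PL/SQL here ...

          v_ret := DBMS_PROFILER.FLUSH_DATA;     -- pushes line-level counters to plsql_profiler_data
          v_ret := DBMS_PROFILER.STOP_PROFILER;
        END;
        /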

    Read the article

  • FTS: Searching across multiple fields 'intelligently'

    - by Wild Thing
    Hi, I have a SP using FTS (Full Text Search). I want searches across multiple fields, 'intelligently' ranking results based on the weights I assign. Consider a search on a view fetching data from tables: Book, Author and Genre. Now, I want the searcher to be able to do: "Ludlum Fiction", "Robert Ludlum Bourne", "Bourne Ludlum", etc. Unfortunately, the only way I have been able to do that at present is this: http://pastebin.com/fdce11ff This is pretty bad, because I am manually breaking up the search string. I know I am doing this completely the wrong way, but can't figure out the right way to search across multiple fields in FTS. Can somebody help please?
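
    One alternative to splitting the string by hand is to hand the whole phrase to FREETEXTTABLE over all the indexed columns of the view and order by RANK; it handles free word order like "Bourne Ludlum", though it does not support per-field weights. A hedged sketch with made-up view and column names:

        DECLARE @SearchTerm NVARCHAR(200);
        SET @SearchTerm = N'Robert Ludlum Bourne';

        SELECT   v.*, ft.[RANK]
        FROM     FREETEXTTABLE(BookSearchView, (Title, AuthorName, GenreName), @SearchTerm) AS ft
        JOIN     BookSearchView AS v ON v.BookId = ft.[KEY]
        ORDER BY ft.[RANK] DESC;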

    Read the article

  • Why Does Piping Binary Text to the Screen often Horck a Terminal

    - by Alan Storm
    Imaginary situation: you've used mysqldump to create a backup of a MySQL database. This database has columns that are blobs. That means your "text" dump file contains both strings and binary data (binary data stored as strings?). If you cat this file to the screen, $ cat dump.mysql, you'll often get unexpected results: the terminal will start beeping, and after the output finishes scrolling by you'll often have garbage characters entered on your terminal as though you'd typed them, and sometimes your prompt and anything you type will be garbage characters. Why does this happen? Put another way, I think I'm looking for an overview of what's actually happening when you store binary strings in a file, when you cat that file, when the results of the cat are reported to the terminal, and any other steps I'm missing.
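
    The short version is that a blob can contain terminal control bytes and escape sequences (BEL, charset-switch sequences, and so on), and the terminal obeys them exactly as if they had been typed. Two safe habits, assuming a reasonably standard Linux/Unix box:

        # View the dump through a pager that shows control bytes as visible escapes
        less dump.mysql

        # If the terminal has already been switched into a garbage character set, reinitialise it
        reset        # or: stty sane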

    Read the article

  • Addresses stored in SQL Server have many small variations (errors)

    - by MAW74656
    I have a table in my database which stores packing slips and their information. I'm trying to query that table and get each unique address. I've come close, but I still have many near misses, and I'm looking for a way to exclude these near duplicates from my select. Sample data:
      CompanyCode | CompanyName | Addr1 | City | State | Zip
      10033 | UNITED DIE CUTTING & FINISHIN | 3610 HAMILTON AVE | CLEVELAND | Ohio | 44114
      10033 | UNITED DIE CUTTING & FINISHING | 3610 HAMILTON AVE | CLEVELAND | Ohio | 44114
      10033 | UNITED DIE CUTTING & FINISHING | 3610 HAMILTON AVE. | CLEVELAND | Ohio | 44114
      10033 | UNITED DIE CUTTING & FINISHING | 3610 HAMILTON AVENUE | CLEVELAND | Ohio | 44114
      10033 | UNITED DIECUTTING & FINISHING | 3610 HAMILTON AVE | CLEVELAND | Ohio | 44144
      10033 | UNITED FINISHING | 3610 HAMILTON AVE | CLEVLAND | Ohio | 44114
      10033 | UNITED FINISHING & DIE CUTTING | 3610 HAMILTON AVE | CLEVELAND | Ohio | 44114
    And all I want is 1 record. Is there some way I can get the "average" record? Meaning, if most of the records say CLEVELAND instead of CLEVLAND, I want my 1 record to say CLEVELAND. Is there any way to pare this data down to what I'm looking for? Desired output:
      CompanyCode | CompanyName | Addr1 | City | State | Zip
      10033 | UNITED DIE CUTTING & FINISHING | 3610 HAMILTON AVE | CLEVELAND | Ohio | 44114
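
    There is no built-in "average" of strings, but a rough approximation is to keep, per company, the combination of values that occurs most often. A T-SQL sketch (the table name is a placeholder); note it picks the most frequent whole row rather than a per-column mode, so it will not match the desired output exactly when no single row combines all the majority values:

        WITH ranked AS (
            SELECT CompanyCode, CompanyName, Addr1, City, State, Zip,
                   ROW_NUMBER() OVER (PARTITION BY CompanyCode
                                      ORDER BY COUNT(*) DESC) AS rn
            FROM   PackingSlips
            GROUP  BY CompanyCode, CompanyName, Addr1, City, State, Zip
        )
        SELECT CompanyCode, CompanyName, Addr1, City, State, Zip
        FROM   ranked
        WHERE  rn = 1;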

    Read the article

  • What's your release process for your commercial application?

    - by dr. evil
    If you are developing a commercial desktop application, what's your release process? Sample process:
    1. Develop it: patch bugs, add features, etc.
    2. Feature freeze (do not fix or add anything unless it's absolutely required)
    3. Test it
    4. If everything is OK, release it; if not, fix it, test it, release it
    I think the most crucial question is: what's your approach to the "feature freeze, test, release" cycle? Or do you test so frequently that you don't need such a cycle and your software is always ready for public release?

    Read the article

  • Will MyISAM type tables work better than InnoDB for large numbers of columns?

    - by Ethan
    I have a MySQL InnoDB table with 238 columns. 56 of them are TEXT type, 27 are VARCHAR(255). Sometimes I get MySQL error 139 when users insert data. After research I found that I'm probably running into InnoDB row size/column size/column count limitations. (I'm putting it that way because the specific limits among those three things are interdependent.) Docs on InnoDB give an idea of the limits. If I switch this table to MyISAM, is it likely to solve the problem? I understand the maximum row size of 65,535 bytes. I think I'm hitting InnoDB's additional 8000-byte limit somehow. Switching to PostgreSQL is also a remote option, but would take much longer.

    Read the article

  • Problem with DataGridView in VB.NET

    - by user225269
    I'm trying to add a DataGridView in VB.NET, but it does not allow me to change the connection string or the database it connects to. The only thing I'm seeing is the previous MS SQL database that I connected the DataGridView to, and every time I click New Connection, the window closes and leaves me with the DataGridView still using the previous connection. That's no use to me, because now I want to connect it to MySQL, not MS SQL. It seems to be some sort of cache-like feature in VB.NET; how do I get rid of it so that I can add the new connection for MySQL? Do I need to reinstall Visual Studio 2008?

    Read the article

  • Oracle: Use of named parameter notation when calling functions in INSERT statements not allowed?

    - by Sathya
    Why does Oracle 10g R2 not allow the use of named parameter notation when calling functions in INSERT statements? In my app, I'm calling a function in an insert statement. If I use the named method of parameter passing, I get an ORA-00907: missing right parenthesis error message:
      INSERT INTO foo (a, b, c) VALUES (c, F1(P1 => '1', P2 => '2', P3 => '3'), e)
    Changing to positional parameter passing, the same code compiles with no errors:
      INSERT INTO foo (a, b, c) VALUES (c, F1('1', '2', '3'), e)
    Why is this so?
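
    As far as I know, calling PL/SQL functions with named notation from within SQL only became legal in later releases (11g); on 10gR2 one workaround is to do the named-notation call inside PL/SQL, where it is allowed, and only then run the insert. A sketch with invented variable names:

        DECLARE
          v_a foo.a%TYPE;   -- whatever "c" was in the original statement
          v_b foo.b%TYPE;
          v_e foo.c%TYPE;   -- whatever "e" was
        BEGIN
          -- Named notation is accepted inside PL/SQL even on 10gR2...
          v_b := F1(P1 => '1', P2 => '2', P3 => '3');

          -- ...so the INSERT itself only ever sees a plain value.
          INSERT INTO foo (a, b, c) VALUES (v_a, v_b, v_e);
        END;
        /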

    Read the article

  • Send a "304 Not Modified" for images stored in the datastore

    - by Emilien
    I store user-uploaded images in the Google App Engine datastore as db.Blob, as proposed in the docs. I then serve those images at /images/<id>.jpg. The server always sends a 200 OK response, which means that the browser has to download the same image multiple times (== slower) and that the server has to send the same image multiple times (== more expensive). As most of those images will likely never change, I'd like to be able to send a 304 Not Modified response. I am thinking about calculating some kind of hash of the picture when the user uploads it, and then using this to know if the user already has the image (maybe send the hash as an ETag?). I have found this answer and this answer that explain the logic pretty well, but I have 2 questions: Is it possible to send an ETag in Google App Engine? Has anyone implemented such logic, and/or is there any code snippet available?
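
    ETags are just response headers, so they can be set from a normal handler; a rough webapp sketch, assuming the image model stores a hash computed at upload time (the model and property names are invented for illustration):

        from google.appengine.ext import webapp, db

        class Image(db.Model):
            data = db.BlobProperty()
            content_hash = db.StringProperty()   # e.g. hashlib.sha1(data).hexdigest(), set at upload

        class ImageHandler(webapp.RequestHandler):
            def get(self, image_id):
                img = Image.get_by_id(int(image_id))
                etag = '"%s"' % img.content_hash

                # If the browser already has this exact content, answer 304 with no body.
                if self.request.headers.get('If-None-Match') == etag:
                    self.response.set_status(304)
                    return

                self.response.headers['ETag'] = etag
                self.response.headers['Content-Type'] = 'image/jpeg'
                self.response.out.write(img.data)

        application = webapp.WSGIApplication([(r'/images/(\d+)\.jpg', ImageHandler)])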

    Read the article

  • MSSQL STORED PROC SELECTING FROM A FIELD FROM 1 TABLE USING LIKE TO CREATE MORE THAN 1 COLUMN IN A DATAGRID

    - by djshortbus
    I have an ASP.NET DataGrid and I'm trying to use a SELECT ... LIKE 'X%' against a table that has one field called LOCATION. I'm trying to display the locations that start with a certain letter (for example wxxx, axxx, fxxx) in different columns in my data grid.
      SELECT DISTINCT LM.LOCATION AS '0 LOCATIONS', LM.COUNTLEVEL AS 'COUNTLEVEL'
      FROM SOH S WITH(NOLOCK)
      JOIN LOCATIONMASTER LM ON LM.LMID = S.LMID
      WHERE LM.COUNTLEVEL = 1
        AND LM.LOCATION NOT IN ('RECOU','PROBLEM','TOSTOCK','PYXVLOC')
        AND LM.LOCATION LIKE '0%'

      SELECT DISTINCT LM.LOCATION AS 'A LOCATIONS', LM.COUNTLEVEL AS 'COUNTLEVEL'
      FROM SOH S WITH(NOLOCK)
      JOIN LOCATIONMASTER LM ON LM.LMID = S.LMID
      WHERE LM.COUNTLEVEL = 1
        AND LM.LOCATION NOT IN ('RECOU','PROBLEM','TOSTOCK','PYXVLOC')
        AND LM.LOCATION LIKE 'A%'
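
    One way to get the letter groups side by side in a single result set, instead of one SELECT per letter, is to number the rows within each first-letter group and pivot with MAX(CASE ...); a hedged sketch reusing the tables from the question:

        ;WITH locs AS (
            SELECT DISTINCT LM.LOCATION, LEFT(LM.LOCATION, 1) AS FirstChar
            FROM SOH S WITH (NOLOCK)
            JOIN LOCATIONMASTER LM ON LM.LMID = S.LMID
            WHERE LM.COUNTLEVEL = 1
              AND LM.LOCATION NOT IN ('RECOU','PROBLEM','TOSTOCK','PYXVLOC')
        ), numbered AS (
            SELECT LOCATION, FirstChar,
                   ROW_NUMBER() OVER (PARTITION BY FirstChar ORDER BY LOCATION) AS rn
            FROM locs
        )
        SELECT
            MAX(CASE WHEN FirstChar = '0' THEN LOCATION END) AS [0 LOCATIONS],
            MAX(CASE WHEN FirstChar = 'A' THEN LOCATION END) AS [A LOCATIONS]
            -- add one MAX(CASE ...) line per letter needed
        FROM numbered
        GROUP BY rn
        ORDER BY rn;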

    Read the article

  • measuring performance - using real clicks vs "ab" command

    - by shanyu
    I have a web site in closed beta, developed in Django, running with MySQL on Debian. In the last few days, the main page has been showing a slowdown: for every ten clicks, one or two receive an extremely slow response (10 seconds or more), while the others are as fast as they used to be. While searching for the problem, I ran into an issue I couldn't grasp: top shows that when I request the main page, mysql shoots up to 90%-100% CPU usage, and I get the page just as the CPU use gets back to normal. So I thought it was the DB. Then I ran ab with parameters -n 1000 -c 5 and got decent performance, about 100 pages per second, just as it was before the slowdown. I would expect worse performance, given that 10-20% of real requests take 10 seconds to load. Is this conflict between ab and "real" clicks normal, or am I using ab with the wrong configuration?
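
    ab hitting only one URL with warm caches and identical queries can easily miss what real, varied users trigger. One way to catch the 10-second outliers on the database side, independently of ab, is MySQL's slow query log; a sketch (dynamic switching like this needs MySQL 5.1 or later):

        SET GLOBAL slow_query_log = 'ON';
        SET GLOBAL long_query_time = 5;    -- log anything slower than 5 seconds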

    Read the article

  • calling a stored postgres function from php

    - by KittyYoung
    Just a little confused here... I have a function in Postgres, and when I'm at the pg prompt I just do:
      SELECT zp('zc', 10, 20, 90);
      FETCH ALL FROM zc;
    I'm wondering how to do this from PHP. I thought I could just do:
      $q = pg_query("SELECT zp('zc',10,20,90)");
    But how do I "fetch" from that query?
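
    The cursor the function opens only lives until the end of the transaction, so with PHP's default autocommit it is already gone by the time of the FETCH; wrapping both statements in one explicit transaction works. A minimal sketch (connection details are placeholders):

        <?php
        $conn = pg_connect('host=localhost dbname=mydb user=me password=secret');

        pg_query($conn, 'BEGIN');
        pg_query($conn, "SELECT zp('zc', 10, 20, 90)");   // opens the zc cursor
        $result = pg_query($conn, 'FETCH ALL FROM zc');

        while ($row = pg_fetch_assoc($result)) {
            // use $row ...
        }

        pg_query($conn, 'COMMIT');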

    Read the article
