Search Results

Search found 42428 results on 1698 pages for 'database query'.

Page 316/1698

  • Rails: How do I unserialize from database?

    - by Macint
    Hello, I am currently trying to save information for an invoice/bill. On the invoice I want to show what the total price is made up of: the procedures and items, their price and the qty. In the end I hope to get it to look like this:

        Consult [date] [total_price]
          Procedure_name [price] [qty]
          Procedure_name [price] [qty]
        Consult [date] [total_price]
          Procedure_name [price] [qty]
        etc...

    All this information is available through the database, but I want to save the information as a separate copy, so that if the user changes the price of some procedures the invoice information is still correct. I thought I'd do this by serializing and saving the data to a column (consult_data) in the Invoice table. My model:

        class Invoice < ActiveRecord::Base
          ...stuff...
          serialize :consult_data
          ...
        end

    This is what I get from the form (1 consult and 3 procedures):

        {"commit"=>"Save draft", "authenticity_token"=>"MZ1OiOCtj/BOu73eVVkolZBWoN8Fy1skHqKgih7Sbzw=", "id"=>"113",
         "consults"=>[{"consult_date"=>"2010-02-20", "consult_problem"=>"ABC",
           "procedures"=>[{"name"=>"asdasdasd", "price"=>"2.0", "qty"=>"1"},
                          {"name"=>"AAAnd another one", "price"=>"45.0", "qty"=>"4"},
                          {"name"=>"asdasdasd", "price"=>"2.0", "qty"=>"1"}],
           "consult_id"=>"1"}]}

    My save action:

        def add_to_invoice
          @invoice = @current_practice.invoices.find_by_id(params[:id])
          @invoice.consult_data = params[:consults]
          if @invoice.save
            render :text => "I think it worked"
          else
            render :text => "I don't think it worked"
          end
        end

    It does save to the database, and if I look at the entry in the console I can see that it is all there:

        consult_data: "--- \n- !map:HashWithIndifferentAccess \n consult_da..."

    (The question) But I can't seem to get my data back. I tried assigning the consult_data attribute to a variable and then calling "variable.consult_problem" or "variable[:consult_problem]" (I also tried looping), but it only throws no-method-errors back at me. How do I unserialize the data from the database and turn it back into a hash that I can use? Thank you very much for any help!
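
    If the column really was written through "serialize :consult_data" as above, Rails normally hands the data back already deserialized when the record is loaded, as an array of HashWithIndifferentAccess. A minimal sketch of reading it back, using the names from the params hash in the question:

        invoice = Invoice.find(113)
        invoice.consult_data.each do |consult|            # each consult is a HashWithIndifferentAccess
          puts consult[:consult_date]
          puts consult[:consult_problem]
          consult[:procedures].each do |procedure|
            puts "#{procedure[:name]} #{procedure[:price]} x #{procedure[:qty]}"
          end
        end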

  • How to create a SQL database from a strongly typed dataset

    - by Keith Vinson
    I'm looking for an easy way to transfer a database schema I have developed inside Visual Studio as a strongly typed dataset (xsd file) into a corresponding SQL Server database. Silly me, I assumed the process would be simple, but I can't find out how to do it. I assume I could duplicate the tables column by column, but that seems so error prone. Does anyone know of a way to perform a schema transfer like this? Maybe a tool to translate the xsd file into a corresponding SQL Server DDL file? Final thought: once I have the schema transferred, moving data around between the two data stores will be straightforward; it's just getting the schemas synced that has me stumped... Thanks, Keith

  • Running an existing LINQ query against a dynamic object (DataTable like)

    - by TomTom
    Hello, I am working on a generic OData provider to go against a custom data provider that we have here. This is fully dynamic, in that I query the data provider for the tables it knows. I have a basic storage structure in place so far, based on the OData sample code. My problem is: OData supports queries and expects me to hand in an IQueryable implementation. On the lower side, I don't have any query support. Not a joke - the provider returns tables and the WHERE clause is not supported. Performance is not an issue here - the tables are small. It is OK to sort them in the OData provider. My main problem is this: I submit a SQL statement to get out the data of a table. The result is some sort of ADO.NET data reader here. I need to expose an IQueryable implementation for this data to potentially allow later filtering. Any idea how best to approach that? .NET 3.5 only (no 4.0 planned for some time). I was seriously thinking of creating dynamic DTO classes for every table (emitting bytecode) so I can use standard LINQ. Right now I am using a dictionary per entry (not too efficient), but I see no real way to filter / sort based on them.

  • Replicating MySQL DB to development machine - bad idea?

    - by Joel
    I am considering replicating a production MySQL database to my development machine so I've always got current data. The production database is externally hosted. My development machine is behind an unreliable internet connection. It is entirely possible that the development machine could be disconnected from the internet for extended periods of time (hours). Would there be any adverse effect on the production database by doing this? (I don't strictly need live data - but it would be nice, and a good excuse to dabble with replication. If the consensus is that this is a bad idea, I'll set up a daily job to import the previous night's backup into my development database.)

  • Connection Strings between Web Application and SQL Server

    - by Raven Dreamer
    Greetings. I'm writing a web application that is supposed to connect to a SQL Server database; the connection is built from the following connection string:

        <add key="DatabaseConnectionString" value="server=DEVPC1\SQLEXPRESS;uid=USERID;pwd=PASSWORD;database=DATABASE"/>

    However, whenever I try to run the web application, I get a connection error, specifically:

        An error occurred attempting this login: Login failed for user 'USERID'.

    Any suggestions on how to go about debugging this? I'm not really familiar with SQL, so any help would be greatly appreciated.

  • Access denied when trying to access my database.

    - by Sergio Tapia
    Here's my code:

        <html>
        <head>
        </head>
        <body>
        <?php
        $user = mysql_real_escape_string($_GET["u"]);
        $pass = mysql_real_escape_string($_GET["p"]);
        $query = "SELECT * FROM usario WHERE username = '$user' AND password = '$pass'";
        mysql_connect(localhost, "root", "");
        @mysql_select_db("multas") or die("Unable to select database");
        $result = mysql_query($query);
        if (mysql_numrows($result) > 0) {
            echo 'si';
        }
        ?>
        </body>
        </html>

    And here's the error I get when I try to run it:

        Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: Access denied for user 'ODBC'@'localhost' (using password: NO) in C:\xampp\htdocs\useraccess.php on line 7
        Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: A link to the server could not be established in C:\xampp\htdocs\useraccess.php on line 7
        Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: Access denied for user 'ODBC'@'localhost' (using password: NO) in C:\xampp\htdocs\useraccess.php on line 8
        Warning: mysql_real_escape_string() [function.mysql-real-escape-string]: A link to the server could not be established in C:\xampp\htdocs\useraccess.php on line 8
        Warning: mysql_numrows() expects parameter 1 to be resource, boolean given in C:\xampp\htdocs\useraccess.php on line 16
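
    A hedged sketch of the usual cause: mysql_real_escape_string() needs an open connection, so connecting (and selecting the database) before escaping typically clears these warnings. Names below mirror the question's code; this is an illustration, not a confirmed fix:

        <?php
        // Connect first, then escape - mysql_real_escape_string() requires a live link.
        $link = mysql_connect("localhost", "root", "") or die(mysql_error());
        mysql_select_db("multas", $link) or die("Unable to select database");

        $user = mysql_real_escape_string($_GET["u"], $link);
        $pass = mysql_real_escape_string($_GET["p"], $link);

        $query  = "SELECT * FROM usario WHERE username = '$user' AND password = '$pass'";
        $result = mysql_query($query, $link);
        if (mysql_num_rows($result) > 0) {
            echo 'si';
        }
        ?>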

  • How does the data storage work? [closed]

    - by Andres Adhi
    I am really new to the whole concept of data storage, domains, servers and everything else related to this. Can someone please explain what a Domain is? How are servers part of the Domain, and how are databases stored on the server or Domain? How would a new server be able to connect to an existing database server to get all the data it needs? I tried to find this information on the web but I am not really finding a good resource. It may be because this is really basic information. I would really appreciate it if someone could explain these concepts in plain terms. Thanks in advance.

  • Changing the character encoding of a MySQL database

    - by Julien Genestoux
    Our whole application is now able to handle UTF-8 and it will be our choice in terms of encoding all across our architecture. The last step is to change the encoding of our MySQL databases. Of course,

        ALTER TABLE db_table CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

    should be able to convert each of the tables to the right UTF8 encoding, yet, is there anything else I should do? I believe that the my.cnf configuration file needs to be changed as well.
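
    A sketch of the settings commonly paired with that conversion (assumptions: MySQL 5.x option names, and "my_db" is a placeholder for the actual database name). The database-level default also has to be updated so that newly created tables inherit UTF-8:

        -- per-database default, so new tables pick up utf8
        ALTER DATABASE my_db CHARACTER SET utf8 COLLATE utf8_general_ci;

    and in my.cnf:

        [client]
        default-character-set = utf8

        [mysqld]
        character-set-server = utf8
        collation-server     = utf8_general_ci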

  • Must all Concurrent Data Store (CDB) locks be explicitly released when closing a Berkeley DB?

    - by Steve Emmerson
    I have an application that comprises multiple processes, each accessing a single Berkeley DB Concurrent Data Store (CDB) database. Each process is single-threaded and does no explicit locking of the database. When each process terminates normally, it calls DB->close() and DB_ENV->close(). When all processes have terminated, there should be no locks on the database. Episodically, however, the database behaves as if some process were holding a write-lock on it even though all processes have terminated normally. Does each process need to explicitly release all locks before calling DB_ENV->close()? If so, how does the process obtain the "locker" parameter for the call to DB_ENV->lock_vec()?

  • Database table copying

    - by vbNewbie
    I am trying to rectify a previous database design whose tables contain data that needs to be saved. Instead of recreating a completely new database, since some of the tables are still reusable, I need to split an existing table into 2 new tables, which I have done. Now I am trying to insert the data into the 2 new tables, and because of duplicate data in the old table I am having a hard time doing this.

    Old table structure:

        ClientProjects
          clientId    PK
          clientName
          clientProj
          hashkey     MD5 (clientName and clientProj)

    New table structures:

        client
          clientId    PK
          clientName

        projects
          queryId     PK
          clientId    PK
          projectName

    I hope this makes sense. The problem is that in the old table, for example, you have clients with multiple clientIds.
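
    A possible direction, sketched in plain SQL. Assumptions: clientName is what identifies a duplicate client, the new clientId and queryId values are generated by the new tables rather than carried over, and the identifiers follow the structures above:

        -- one row per distinct client name
        INSERT INTO client (clientName)
        SELECT DISTINCT clientName
        FROM ClientProjects;

        -- re-attach each old project row to the new clientId via the name
        INSERT INTO projects (clientId, projectName)
        SELECT c.clientId, cp.clientProj
        FROM ClientProjects cp
        JOIN client c ON c.clientName = cp.clientName;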

  • Django database caching

    - by hekevintran
    I have a Django form that uses an integer field to look up a model object by its primary key. The form has a save() method that uses the model object referred to by the integer field. The model's manager's get() method is called twice, once in the clean method and once in the save() method:

        class MyForm(forms.Form):
            id_a = fields.IntegerField()

            def clean_id_a(user_id):
                id_a = self.cleaned_data['id_a']
                try:
                    # here is the first call to get
                    MyModel.objects.get(id=id_a)
                except User.DoesNotExist:
                    raise ValidationError('Object does not exist')

            def save(self):
                id_a = self.cleaned_data['id_a']
                # here is the second call to get
                my_model_object = MyModel.objects.get(id=id_a)
                # do other stuff

    I wasn't sure whether this hits the database two times or one time, so I returned the object itself in the clean method so that I could avoid a second get() call. Does calling get() hit the database two times? Or is the object cached in the thread?

        class MyForm(forms.Form):
            id_a = fields.IntegerField()

            def clean_id_a(user_id):
                id_a = self.cleaned_data['id_a']
                try:
                    # here is my workaround
                    return MyModel.objects.get(id=id_a)
                except User.DoesNotExist:
                    raise ValidationError('Object does not exist')

            def save(self):
                # looking up the cleaned value returns the model object
                my_model_object = self.cleaned_data['id_a']
                # do other stuff
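
    For what it's worth, a quick way to check the query count yourself (a small sketch; it assumes DEBUG=True so Django records queries on the connection):

        from django.db import connection, reset_queries

        reset_queries()
        MyModel.objects.get(id=1)
        MyModel.objects.get(id=1)
        print(len(connection.queries))  # prints 2: each get() issues its own query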

  • Error opening a SQL 2005 database in VS 2008

    - by Ken
    When I created my database using SQL Server 2005, I was able to connect and view it in Visual Studio 2008. I then detached the database onto my flash drive and brought it home to work in VS 2008 - that worked. Finally, when I detached it from home and brought it back to work, it would not open. It says that this version of SQL Server is not compatible with another version. I forget the exact wording of the error, as it was lengthy. Any help you guys can provide would be very helpful! Thank you in advance!

  • Searching Database by Arbitrary Date in PHP

    - by jverdi
    Suppose you have a messaging system built in PHP with a MySQL database backend, and you would like to support searching for messages using arbitrary date strings. The database includes a messages table, with a 'date_created' field represented as a datetime. The arbitrary date strings accepted from the user should mirror those accepted by strtotime. For the following examples, searches are performed on March 21, 2010:

        "January 26, 2009" would return all messages between 2009-01-26 00:00:00 and 2009-01-27 00:00:00
        "March 8" would return all messages between 2010-03-08 00:00:00 and 2010-03-09 00:00:00
        "Last week" would return all messages between 2010-03-14 00:00:00 and 2010-03-21 18:25:00
        "2008" would return all messages between 2008-01-01 00:00:00 and 2008-12-31 00:00:00

    I began working with date_parse, but the number of variables grew quickly. I wonder if I am re-inventing the wheel. Does anyone have a suggestion that would work either as a general solution or one that would capture most of the possible input strings?
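
    The general shape of the query side, as a rough sketch (not the poster's code; the range-widening logic - how far to push $end depending on how specific the input was - is exactly the part that still has to be worked out per case):

        <?php
        // $input is the user's date string, e.g. "January 26, 2009"
        $start = strtotime($input);
        $end   = strtotime('+1 day', $start);   // widen to +1 month / +1 year for vaguer input

        $sql = sprintf(
            "SELECT * FROM messages WHERE date_created >= '%s' AND date_created < '%s'",
            date('Y-m-d H:i:s', $start),
            date('Y-m-d H:i:s', $end)
        );
        ?>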

  • Carrierwave upload to a tmp dir before saving to database

    - by user827570
    I'm trying to build a visual editor where users can click an image and are presented with an image upload form; once the upload is done, I use Ajax to return the image and insert it back into the page. But the above method inserts the image straight into the database, and I want users to be able to preview the image before it is inserted into the database. So I was wondering if the image, using CarrierWave, could be uploaded to a temp location, sent back to the user, and then, when the user saves the page, moved into the permanent location. Here's what I have so far:

        def edit_image
          @page = Page.find(1)
          @page.update_attributes(params[:page])
          @page.save
          return :text => @page.file
        end

    But this is what I want to achieve:

        def temp_image
          # uploads received image to a temp location
          # returns image to the user
        end

    And once the user clicks save:

        def save
          # moves the file in the temp folder to the permanent location
        end

    Cheers
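
    One possible direction, sketched against CarrierWave's cache mechanism. This assumes an uploader mounted as "file" on Page (mount_uploader :file), and the uploader class and param names are illustrative, not from the question:

        def temp_image
          uploader = FileUploader.new                 # the mounted uploader class
          uploader.cache!(params[:file])              # stored under the uploader's cache_dir (a tmp location)
          render :json => { :cache_name => uploader.cache_name, :url => uploader.url }
        end

        def save
          @page = Page.find(params[:id])
          @page.file_cache = params[:cache_name]      # re-attach the cached upload
          @page.save                                  # CarrierWave promotes it to permanent storage
        end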

  • Password reset by email without a database table

    - by jpatokal
    The normal flow for resetting a user's password by mail is this:

        1. Generate a random string and store it in a database table
        2. Email string to user
        3. User clicks on link containing string
        4. String is validated against database; if it matches, user's pw is reset

    However, maintaining a table and expiring old strings etc seems like a bit of an unnecessary hassle. Are there any obvious flaws in this alternative approach?

        1. Generate a MD5 hash of the user's existing password
        2. Email hash string to user
        3. User clicks on link containing string
        4. String is validated by hashing existing pw again; if it matches, user's pw is reset

    Note that the user's password is already stored in a hashed and salted form, and I'm just hashing it once more to get a unique but repeatable string. And yes, there is one obvious "flaw": the reset link thus generated will not expire until the user changes their password (clicks the link). I don't really see why this would be a problem though -- if the mailbox is compromised, the user is screwed anyway.

  • How to read a txt file from the database (line by line)

    - by Ranjana
    I have stored a txt file in a SQL Server database, and I need to read the txt file line by line to get at its content. My code:

        DataTable dtDeleteFolderFile = new DataTable();
        dtDeleteFolderFile = objutility.GetData("GetTxtFileonFileName",
            new object[] { ddlSelectFile.SelectedItem.Text }).Tables[0];

        foreach (DataRow dr in dtDeleteFolderFile.Rows)
        {
            name = dr["FileName"].ToString();
            records = Convert.ToInt32(dr["NoOfRecords"].ToString());
            bytes = (Byte[])dr["Data"];
        }

        FileStream readfile = new FileStream(Server.MapPath("txtfiles/" + name), FileMode.Open);
        StreamReader streamreader = new StreamReader(readfile);
        string line = "";
        line = streamreader.ReadLine();

    But here I have used a FileStream to read from a particular path, whereas I have saved the txt file in byte format into my database. How do I read the txt file content using the byte[] value, instead of using the path?
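
    A minimal sketch of the byte[] route (assuming the Data column holds the plain text of the file): wrap the array in a MemoryStream and hand that to the StreamReader instead of a FileStream.

        using (var memory = new MemoryStream(bytes))
        using (var reader = new StreamReader(memory))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // process each line of the stored txt file here
            }
        }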

  • Excel and SQL, order by help

    - by perlnoob
    I'm stuck in Excel 2007, running a query; it worked until I wanted to add a 2nd row containing "field 2".

        Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        From site_info.dbo."Site Updates"
        Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Chicago')
        Union all
        Select "Site Updates"."Posted By", "Site Uploaded"."Site Upload Date"
        From site_info.dbo."Site Updates"
        Where ("Site Updates"."Posted By") AND "Site Uploaded"."Site Upload Date">={ts '2010-05-01 00:00:00'}), ("Site Location"='Denver')
        Order By "Site Location" ASC;

    Basically I want 2 different cells for the locations, for example:

        name  - Chicago - Denver
        user1 - 100     - 20
        user2 - 34      - 1002

    Right now, for some odd reason, it's combining them like:

        name  - Chicago
        user1 - 120
        user2 - 1036

    Please note that updating to the 2010 beta is not a viable option for me at this point. Any and all input that will help me is greatly appreciated. I have read over http://www.techonthenet.com/sql/order_by.php, however it has not gotten me very far with this question. If you have another SQL resource you recommend for people trying to get their feet wet, I'd greatly appreciate it. If it helps, all the info is in the same table.
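
    One common way to get one column per location is conditional aggregation rather than UNION, roughly like the sketch below (identifiers follow the question; the aggregate, here a count per user, is an assumption):

        Select "Posted By",
               Sum(Case When "Site Location" = 'Chicago' Then 1 Else 0 End) As Chicago,
               Sum(Case When "Site Location" = 'Denver'  Then 1 Else 0 End) As Denver
        From site_info.dbo."Site Updates"
        Where "Site Upload Date" >= {ts '2010-05-01 00:00:00'}
        Group By "Posted By";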

  • I need a program to store the database script for oracle

    - by Hakan Kara
    We are developing a project that has 3 environments (development, test, production), so there are 3 databases (actually more than 3, because we have 5 customers, so we have more than 10 databases) and they must be synchronised. There are 30 coders working on this project. Everyone adds, deletes, and changes procedures, table columns etc. We need a program to store our database scripts, like Visual Studio's Team Foundation Server, that can:

        See the change history of a script file.
        Let everyone access it and put their scripts in.
        Recover previous versions of a script file.
        Execute these scripts over a selected database.
        Compare databases by procedures (not only by name, but by the content of the procedure), functions, table columns, packages etc.

    I am searching for a program like that. Which one do you suggest?

  • Best way to save complex Python data structures across program sessions (pickle, json, xml, database)

    - by Malcolm
    Looking for advice on the best technique for saving complex Python data structures across program sessions. Here's a list of techniques I've come up with so far:

        pickle/cpickle
        json
        jsonpickle
        xml
        database (like SQLite)

    Pickle is the easiest and fastest technique, but my understanding is that there is no guarantee that pickle output will work across various versions of Python 2.x/3.x or across 32 and 64 bit implementations of Python. Json only works for simple data structures. Jsonpickle seems to correct this AND seems to be written to work across different versions of Python. Serializing to XML or to a database is possible, but represents extra effort since we would have to do the serialization ourselves manually. Thank you, Malcolm
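
    For the pickle route specifically, a small illustration (not from the post): pinning an explicit protocol keeps the on-disk format predictable - protocol 2, for example, is a documented format that both Python 2.x and 3.x can read, independent of 32/64-bit builds.

        import pickle

        data = {"scores": [1, 2, 3], "meta": {"version": (1, 0)}}

        # write with a fixed, documented protocol
        with open("state.pkl", "wb") as f:
            pickle.dump(data, f, protocol=2)

        # read it back in a later session
        with open("state.pkl", "rb") as f:
            restored = pickle.load(f)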

  • Intermittent "No Database Selected" in PHP/MySQL?

    - by ANE
    I have a PHP/MySQL form with a dropdown box containing a list of 350 names. When any random name is selected, sometimes it works and displays info about that name from the database, and sometimes the form gives the error "No Database Selected". Here's what I've tried, pretty much grasping at straws as I'm not a programmer:

        Increasing max_connections in /etc/my.cnf from 200 to 2000 (even though only 4-5 connections are made and it's a lightly used server)
        Changing mysql_pconnect to mysql_connect
        Adding the word true to this connection string: $mysql = mysql_pconnect($hostname_mysql, $username_mysql, $password_mysql, true) or trigger_error(mysql_error(), E_USER_ERROR);
        Changing the word require_once to require on this line: <?php require('/home/user/Connections/mysql.php'); ?>
        Enabling MySQL & PHP query & error logging (no errors logged)

    Here is the code: [removed old bad code]

    Update: Working answer from Rob Apodaca below.

  • Would this method work to scale out SQL queries?

    - by David
    I have a database containing a single huge table. At the moment a query can take anything from 10 to 20 minutes and I need that to go down to 10 seconds. I have spent months trying different products like GridSQL. GridSQL works fine, but it uses its own parser, which does not have all the needed features. I have also optimized my database in various ways without getting the speedup I need. I have a theory on how one could scale out queries, meaning that I utilize several nodes to run a single query in parallel. The idea is to take an incoming SQL query and simply run it exactly as it is on all the nodes. When the results are returned to a coordinator node, the same query is run on the union of the result sets. I realize that an aggregate function like average needs to be rewritten into a count and a sum for the nodes, and that the coordinator divides the sum of the sums by the sum of the counts to get the average. What kinds of problems could not easily be solved using this model? I believe one issue would be the count distinct function. Edit: I am getting so many nice suggestions, but none have addressed the method.
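
    To make the average rewrite described above concrete, a small worked sketch (table and column names are purely illustrative):

        -- run on every node
        SELECT SUM(amount) AS s, COUNT(amount) AS c FROM orders;

        -- run on the coordinator, over the union of the per-node rows
        SELECT SUM(s) / SUM(c) AS avg_amount FROM node_results;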

  • Capistrano 3, Rails 4, database configuration does not specify adapter

    - by Kazmin
    When I start cap production deploy it fails like this:

        DEBUG [4ee8fa7a] Command: cd /home/deploy/myapp/releases/releases/20131025212110 && (RVM_BIN_PATH=~/.rvm/bin RAILS_ENV= ~/.rvm/bin/myapp_rake assets:precompile )
        DEBUG [4ee8fa7a] rake aborted!
        DEBUG [4ee8fa7a] database configuration does not specify adapter

    You can see that "RAILS_ENV=" is actually empty, and I'm wondering why that might be happening. I assume that this is the reason for the latter error that I don't have a database configuration. The deploy.rb file is below:

        set :application, 'myapp'
        set :repo_url, '[email protected]:developer/myapp.git'
        set :branch, :master
        set :deploy_to, '/home/deploy/myapp/releases'
        set :scm, :git
        set :devpath, "/home/deploy/myapp_development"
        set :user, "deploy"
        set :use_sudo, false
        set :default_env, { rvm_bin_path: '~/.rvm/bin' }
        set :keep_releases, 5

        namespace :deploy do
          desc 'Restart application'
          task :restart do
            on roles(:app), in: :sequence, wait: 5 do
              # Your restart mechanism here, for example:
              within release_path do
                execute " bundle exec thin restart -O -C config/thin/production.yml"
              end
            end
          end

          after :restart, :clear_cache do
            on roles(:web), in: :groups, limit: 3, wait: 10 do
              within release_path do
              end
            end
          end

          after :finishing, 'deploy:cleanup'
        end
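
    One thing worth checking (a sketch of the usual setup, not a confirmed fix): capistrano-rails takes the environment from the :rails_env setting, which is normally defined per stage, e.g. in config/deploy/production.rb:

        set :stage, :production
        set :rails_env, 'production'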

  • Static Website - Converting to Dynamic, need to import information from database on different host

    - by gvernold
    This seems really complicated to ask about, so I hope someone can help. We have a long-running static website held with a hosting company that provides PHP, Ruby on Rails and Drupal/Joomla support. A little limited, I know, but we got reasonably decent search engine rankings and didn't want them to drop. We have two much more recently created sites on another host, written in Python/Django. The original site is now too big to handle statically and we want to create a more dynamic site in its place without changing servers/webhosts. The data we want to feed the 'new' dynamic site comes from the same database providing the Django sites. What is the best solution to build the new site with? Is it better to create PHP pages that connect to the database on the other host? Ruby on Rails seems like a very fast development environment, not too dissimilar to Django; would we be able to fetch data from the existing databases into a Rails site and use similar URLs to our old static pages?

  • Which is better for multi-use auth, MySQL, PostgreSQL, or LDAP?

    - by Fearless
    I want to set up an Oracle Linux 6 server that gives users secure IMAP email (with dovecot), Jabber IM, FTP (with vsftpd), and calDav. However, I want each user logon to be able to authenticate all services (e.g. Joe Smith signs up once for a username and password that he can use for email, ftp, and his calendar). My question is, which database service will be best suited for that application? Also, is there a way to link the database with the preexisting server shell logins (e.g. so I can read the root account's LogCheck emails on a different device)?

  • Continuously checking database from a Windows service

    - by JonF
    I am making a Windows service which needs to continuously check for database entries that can be added at any time, telling it to execute some code. It looks for entries whose status is set to pending and whose execute time is earlier than the current time. Is the only way to do this to just run select statements over and over? It might need to execute the code every minute, which means I need to run the select statement every minute looking for entries in the database. I'm trying to avoid unnecessary CPU time because I'm probably going to end up paying for CPU cycles at the hosting provider.
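
    A bare-bones polling sketch for reference (all table, column, class names and the connection string here are illustrative placeholders, not from the question; the interval and the query itself would need tuning):

        using System.Data.SqlClient;
        using System.ServiceProcess;
        using System.Timers;

        public class PollingService : ServiceBase
        {
            private readonly Timer _timer = new Timer(60000);   // poll once a minute
            private readonly string _connectionString = "...";  // placeholder

            protected override void OnStart(string[] args)
            {
                _timer.Elapsed += (s, e) => CheckForPendingWork();
                _timer.Start();
            }

            protected override void OnStop()
            {
                _timer.Stop();
            }

            private void CheckForPendingWork()
            {
                using (var conn = new SqlConnection(_connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Id FROM Tasks WHERE Status = 'Pending' AND ExecuteTime <= GETDATE()",
                    conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // run the pending work for reader["Id"] here
                        }
                    }
                }
            }
        }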
