Search Results

Search found 17634 results on 706 pages for 'django multi db'.

Page 304/706

  • A many-to-many relation join disallows IntelliSense/lookup in the joined table

    - by BerggreenDK
    I want to be able to select a product and retrieve all of its sub-parts (which are themselves products). My approach is to find the Product ID, retrieve the list of ProductParts that have it as a parent, and, while fetching those, follow the key back to the child Product to get the name and details of each part. I was hoping to use something similar to: part.linked_product_id.name

    I have two tables: one for [Product], and a relation [ProductPart] that has two FK IDs pointing to [Product]:

        Table Product()
        {
            int ID;       // (PRIMARY, NOT NULL)
            String Name;
            ... details removed for overview purposes ...
        }

        Table ProductPart()
        {
            int Product_ID;       // FK (relation to Product/parent)
            int Part_Product_ID;  // FK (relation to Product/child)
            ... details removed for overview purposes ...
        }

    I have checked the SQL diagram and it shows the two relations (both one-to-many), and in my DBML they also look right. Here is my LINQ to SQL snippet that doesn't work for me; I am wondering why my joins don't work as expected:

        FaultySnippet()
        {
            ProductDataContext db = new ProductDataContext();
            var list = ( from part in db.ProductParts
                         join prod in db.Products on part.Part_Product_ID equals prod.ID
                         where (part.Product_ID == product_ID)
                         select new
                         {
                             Name = part.Part_Product_ID. ?? // <-- NO details from joined table?
                             ... rest of properties from ProductPart join... I hoped...
                         } );
        }
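
    Two notes that may unstick this (hedged, since the DBML itself isn't shown): inside the original query, the joined row is prod, so the selected value should come from prod (e.g. Name = prod.Name) rather than from the FK column; and when both associations exist in the DBML, LINQ to SQL generates navigation properties on ProductPart, so the explicit join can be dropped entirely. A sketch - the property name PartProduct is an assumption (the designer often names twin associations to one table Product and Product1):

        // Hypothetical navigation-property names; check the generated DBML classes.
        var list = from part in db.ProductParts
                   where part.Product_ID == product_ID
                   select new
                   {
                       Name = part.PartProduct.Name,   // follows the Part_Product_ID association
                       PartId = part.Part_Product_ID
                   };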

  • Web page database query optimization

    - by morpheous
    I am putting together a web page which is quite 'expensive' in terms of database hits. I don't want to start optimizing at this stage - though with me trying to hit a deadline, I may end up not optimizing at all. Currently the page requires 18 (that's right, eighteen) hits to the db. I am already using joins, and some of the queries are UNIONed to minimize the trips to the db. My local dev machine can handle this (the page is not slow); however, I feel that if I release this into the wild, the number of queries will quickly overwhelm my database (MySQL).

    I could always use memcache or something similar, but I would much rather continue with my other dev work that needs to be completed before the deadline - at least retrieving the page works; it's simply a matter of optimization now (if required). My question therefore is: is 18 db queries for a single page retrieval completely outrageous (i.e. should I put everything on hold and optimize the hell out of the retrieval logic), or shall I continue as normal, meet the deadline, release on schedule and see what happens?

    [Edit] Just to clarify, I have already done the 'obvious' things like using (single and composite) indexes for the fields used in the queries. What I haven't yet done is run a query analyzer to see if my indexes etc. are optimal.
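
    For the query-analyzer step, MySQL's EXPLAIN is usually enough to spot an unindexed filter or join before deciding whether the 18 queries matter. A sketch (table and column names are placeholders, not from the question):

        EXPLAIN SELECT p.*
        FROM pages p
        JOIN page_tags t ON t.page_id = p.id
        WHERE t.tag = 'example';
        -- In the output, type=ALL (full scan) or a large 'rows' estimate on any
        -- of the 18 queries is the thing worth fixing before launch.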

  • Entity Framework: "Duplicate type name within an assembly" (6.1.0)

    - by CodeMilian
    I am not sure what is going on, but I keep getting the following exception when executing a query: "Duplicate type name within an assembly." I have not been able to find a solution on the web. I had resolved the issue by removing Entity Framework from all the projects in the solution and re-installing it via NuGet. Then, all of a sudden, the exception came back. I have verified my table schema over and over and can find nothing wrong with it. This is the query causing the exception:

        var BaseQuery = from Users in db.Users
                        join UserInstalls in db.UserTenantInstalls on Users.ID equals UserInstalls.UserID
                        join Installs in db.TenantInstalls on UserInstalls.TenantInstallID equals Installs.ID
                        where Users.Username == Username
                              && Users.Password == Password
                              && Installs.Name == Install
                        select Users;

        var Query = BaseQuery.Include("UserTenantInstalls.TenantInstall");
        return Query.FirstOrDefault();

    As I mentioned previously, the same query was working before. The data has not changed and the code has not changed.

  • Returning Database Blobs in TurboGears 2.x / FCGI / Lighttpd extremely slow

    - by Tom
    Hey everyone, I am running a TG2 app on lighttpd via flup/fastcgi. We read images (~30 KB each) from BLOB fields in a MySQL database and return those images with a custom MIME type via a controller method. Caching these images on the hard disk makes no sense, because they change with every request; the only reason we cache them in the DB is that creating them is quite expensive, and the data used to create them is also present in plain text on the website.

    Now to the problem itself: when returning such an image, things get extremely slow. The code runs totally fine under paster itself with no visible delay, but as soon as it runs via fcgi/lighttpd the described phenomenon happens. I profiled the controller method that returns the blob: the entire method runs in a few milliseconds, but when the "return" executes, the entire app hangs for roughly 10 seconds. We could not reproduce the same error with PHP on FCGI; this only seems to happen with TurboGears or Pylons. Here, for your consideration, is the piece of source code concerned:

        @expose(content_type=CUSTOM_CONTENT_TYPE)
        def return_img(self, img_id):
            """Return a DB-persisted image when requested."""
            img = model.Images.by_id(img_id)  # get image from DB
            response.headers['content-type'] = 'image/png'
            return img.data  # this causes the app to hang for 10 seconds
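
    One cheap experiment worth ruling out (an assumption, since the column mapping isn't shown): some MySQL drivers hand BLOB columns back as buffer- or array-like wrappers rather than plain byte strings, and a WSGI stack that has to iterate such an object can degrade badly under flup even though paster copes. Coercing the payload before returning it costs one line:

        @expose(content_type=CUSTOM_CONTENT_TYPE)
        def return_img(self, img_id):
            img = model.Images.by_id(img_id)
            response.headers['content-type'] = 'image/png'
            # str() unwraps buffer/array wrappers so WSGI streams one plain byte string.
            return str(img.data)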

  • Querying a huge database table takes too much time in MySQL

    - by Vijay
    Hi all, I am running SQL queries on a MySQL db table that has 110M+ unique records for a whole day.

    Problem: whenever I run any query with a "where" clause, it takes at least 30-40 minutes. Since I want to generate most of the data on the next day, I need access to the whole db table. Could you please guide me on optimizing / restructuring the deployment model?

    Site description:

        mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0
        4 GB RAM, dual-core dual-CPU 3 GHz
        RHEL 3

    my.cnf contents:

        [root@reports root]# cat /etc/my.cnf
        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1

        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/

        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid
        [root@reports root]#

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance! Regards,
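
    Worth noting: the schema above defines no indexes at all, which by itself forces a full scan of 110M rows for every WHERE clause. A hedged first step (the column choice depends on which predicates your queries actually filter by; USER is only an example):

        -- Build an index on the column(s) your WHERE clauses filter by.
        ALTER TABLE RAW_LOG_20100504 ADD INDEX idx_user (`USER`);

        -- Then confirm the plan actually uses it:
        EXPLAIN SELECT COUNT(*) FROM RAW_LOG_20100504 WHERE `USER` = 123456789012;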

  • [MySQL/PHP] Avoid using RAND()

    - by Andrew Ellis
    So... I have never had a need to do a random SELECT on a MySQL DB until this project I'm working on. After researching it, the general populace seems to say that using RAND() is a bad idea, and I found an article that explains how to do another type of random select. Basically, if I want to select 5 random elements, should I do the following (I'm using the Kohana framework here)? If not, what is a better solution? Thanks, Andrew

        <?php

        final class Offers extends Model
        {
            /**
             * Loads a random set of offers.
             *
             * @param   integer $limit
             * @return  array
             */
            public function random_offers($limit = 5)
            {
                // Find the highest offer_id
                $sql = '
                    SELECT MAX(offer_id) AS max_offer_id
                    FROM offers
                ';
                $max_offer_id = DB::query(Database::SELECT, $sql)
                    ->execute($this->_db)
                    ->get('max_offer_id');

                // Check to make sure we're not trying to load more offers
                // than there really are...
                if ($max_offer_id < $limit)
                {
                    $limit = $max_offer_id;
                }

                $used = array();
                $ids = '';
                for ($i = 0; $i < $limit; )
                {
                    $rand = mt_rand(1, $max_offer_id);
                    if (!isset($used[$rand]))
                    {
                        // Flag the ID as used
                        $used[$rand] = TRUE;

                        // Set the ID
                        if ($i > 0) $ids .= ',';
                        $ids .= $rand;
                        ++$i;
                    }
                }

                $sql = '
                    SELECT offer_id, offer_name
                    FROM offers
                    WHERE offer_id IN(:ids)
                ';
                $offers = DB::query(Database::SELECT, $sql)
                    ->param(':ids', $ids)
                    ->as_object()
                    ->execute($this->_db);

                return $offers;
            }
        }
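
    Two caveats with the approach above: it assumes offer_id values are contiguous (any gap from a deleted row can produce IDs that match nothing), and binding one comma-separated string to :ids will typically quote it as a single value rather than expand to a list. A common gap-tolerant alternative, sketched below, picks one random row per execution using the index on the primary key, so it would be run (or adapted) per item:

        SELECT o.offer_id, o.offer_name
        FROM offers AS o
        JOIN (SELECT FLOOR(RAND() * (SELECT MAX(offer_id) FROM offers)) AS rid) AS r
          ON o.offer_id >= r.rid
        ORDER BY o.offer_id
        LIMIT 1;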

  • Question on overview of C# OOP in business WinForms application - scope of Object

    - by TimR
    I may have all this OO completely wrong, but here goes. The scenario is classic order entry: a Customer places an Order, which has OrderLineItems of StockItems. The Order is entered by an Employee.

    1) Application starts and asks for login/password
    2) Employee selects 'Orders' from the main menu form
    3) Orders form opens...
    4) Employee selects Customer
    5) Employee selects Stock, adds to OrderLineItems
    6) Selects second StockItem; adds to OrderLineItems
    7) Order is committed [stock decremented, order posted to DB, order printed]
    8) Employee is returned to the main menu

    Now with object scope (a sketch of one possible answer follows this list):

    1) Application starts and asks for login/password. Is this the best place to make objEmployee, to be kept for this whole Sales application?
    2) Employee selects 'Orders' from the main menu form.
    3) Orders form opens... Make objOrderHeader; can objEmployee be passed in, or is it created here, or re-created here?
    4) Employee selects Customer - adds/edits Customer details if required... Make objCustomer.
    5) Employee selects Stock, adds to OrderLineItems... Make objStockItem and objOrderLineItem - add to objOrderLineItems_collection.
    6) Selects second StockItem; adds to OrderLineItems... Make objStockItem and objOrderLineItem - add to objOrderLineItems_collection.
    7) Order is committed [stock decremented, order posted to DB, order printed, Order Entered By = EmployeeID]. Once posted to the DB, all objects are now redundant/garbage [except objEmployee?].
    8) Employee is returned to the main menu. Is objEmployee still valid as an object?
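
    One common pattern for the objEmployee question (a sketch; all type and member names here are hypothetical): create the Employee once at login and hand the same instance to each form through its constructor, so it lives for the whole session and is never re-created:

        // Hypothetical types for illustration.
        public class Employee
        {
            public int EmployeeId { get; set; }
            public string Name { get; set; }
        }

        public class OrdersForm : System.Windows.Forms.Form
        {
            private readonly Employee _employee;   // same instance for the whole session

            public OrdersForm(Employee employee)
            {
                _employee = employee;
            }

            private void CommitOrder(Order order)
            {
                order.EnteredBy = _employee.EmployeeId;  // used at step 7
                // post to DB; order, line items, and customer can be discarded afterwards
            }
        }

    Under this design, objEmployee remains valid after step 8 because the main menu still holds the reference, while the order graph becomes garbage as soon as nothing references it.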

  • Why can't I run a Perl program from TextMate?

    - by JZ
    I'm following a bioinformatics text, and this represents one of my first Perl scripts. While in TextMate, it does not produce any output. Is it running at all? I added "hello world" at the bottom, and I don't see that when I run the script in TextMate. What have I done wrong?

        #!/usr/local/bin/perl -w
        use lib "/Users/fogonthedowns/myperllib";
        use LWP::Simple;
        use strict;

        # Set base URL for all eutils
        my $utils = "http://eutils.ncbi.nlm.nih.gov/entrez/eutils";

        my $db = "Pubmed";
        my $query = "Cancer+Prostate";
        my $retmax = 10;

        my $esearch = "$utils/esearch.fcgi?" . "db=$db&retmax=$retmax&term=";
        my $esearch_result = get($esearch.$query);

        print "ESEARCH RESULT: $esearch_result\n";
        print "Using Query: \n$esearch$query\n";
        print "hello world\n";

  • Please help clean this loop

    - by Alex Angelini
    I do not code much in JavaScript, but I have the following snippet, which IMHO looks horrendous, and I have to do this nested iteration quite often in my code. Does anyone have a prettier / easier-to-read solution?

        function addBrowse(data) {
            var list = $('<ul></ul>')
            for (i = 0; i < data.list.length; i++) {
                var file = list.append('<li class="toLeft">' + data.list[i].name + '</li>')
                for (j = 0; j < data.list[i].children.length; j++) {
                    var db = file.append('<li>' + data.list[i].children[j].name + '</li>')
                    for (k = 0; k < data.list[i].children[j].children.length; k++)
                        db.append('<li class="toRight">' + data.list[i].children[j].children[k].name + '</li>')
                }
            }
            $('#browse').append(list).show()
        }

    Here is a sample data element:

        {"file":"","db":"","tbl":"","page":"browse","list":[
            {"name":"/home/alex/GoSource/test1.txt",
             "children":[{"name":"go","children":[{"name":"validation1","children":[]}]}]},
            {"name":"/home/alex/GoSource/test2.txt",
             "children":[{"name":"go","children":[{"name":"validation2","children":[]}]}]},
            {"name":"/home/alex/GoSource/test3.txt",
             "children":[{"name":"go","children":[{"name":"validation3","children":[]}]}]}]}

    Thanks a lot
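
    One way to flatten the nesting is a small recursive walker that maps depth to a CSS class (a sketch; it assumes every node carries a children array, as in the sample above, and falls back to an empty list when one is missing):

        function addBrowse(data) {
            var classes = ['toLeft', '', 'toRight'];
            var list = $('<ul></ul>');

            function walk(nodes, depth) {
                $.each(nodes, function (_, node) {
                    // Depths beyond the class table simply get no class attribute.
                    var cls = classes[depth] ? ' class="' + classes[depth] + '"' : '';
                    list.append('<li' + cls + '>' + node.name + '</li>');
                    walk(node.children || [], depth + 1);
                });
            }

            walk(data.list, 0);
            $('#browse').append(list).show();
        }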

  • What's the difference between these LINQ queries?

    - by SnAzBaZ
    I use LINQ to SQL as my DAL; I then have a project called DB which acts as my BLL. Various applications then access the BLL to read / write data from the SQL database. I have these methods in my BLL for one particular table:

        public IEnumerable<SystemSalesTaxList> Get_SystemSalesTaxList()
        {
            return from s in db.SystemSalesTaxLists
                   select s;
        }

        public SystemSalesTaxList Get_SystemSalesTaxList(string strSalesTaxID)
        {
            return Get_SystemSalesTaxList().Where(s => s.SalesTaxID == strSalesTaxID).FirstOrDefault();
        }

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            return Get_SystemSalesTaxList().Where(s => s.ZipCode == strZipCode).FirstOrDefault();
        }

    All pretty straightforward, I thought. Get_SystemSalesTaxListByZipCode is always returning a null value, though, even when the ZIP code exists in that table. If I write the method like this, it returns the row I want:

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            var salesTax = from s in db.SystemSalesTaxLists
                           where s.ZipCode == strZipCode
                           select s;

            return salesTax.FirstOrDefault();
        }

    Why does the other method not return the same, as the query should be identical? Note that the overloaded Get_SystemSalesTaxList(string strSalesTaxID) returns a record just fine when I give it a valid SalesTaxID. Is there a more efficient way to write these "helper" type classes? Thanks!
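
    Worth noting: because the base helper returns IEnumerable<T>, the Where in the first version runs as LINQ to Objects over every row pulled from the database, while the second version composes into the SQL itself. Besides the efficiency difference, that changes comparison semantics - SQL Server ignores trailing spaces when comparing char values, but .NET string equality does not - which would explain the null if ZipCode is a padded char column (an assumption; check the column type). Returning IQueryable<T> keeps the helpers composable into SQL, a sketch:

        public IQueryable<SystemSalesTaxList> Get_SystemSalesTaxList()
        {
            return db.SystemSalesTaxLists;
        }

        public SystemSalesTaxList Get_SystemSalesTaxListByZipCode(string strZipCode)
        {
            // Now translated to SQL (WHERE ZipCode = @p0), matching the working version.
            return Get_SystemSalesTaxList().Where(s => s.ZipCode == strZipCode).FirstOrDefault();
        }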

  • Relational MySQL - fetched properties?

    - by Kelso.b
    I'm currently using the following PHP code:

        // Get all subordinates
        $subords = array();
        $supervisorID = $this->session->userdata('supervisor_id');
        $result = $this->db->query(sprintf("SELECT * FROM users WHERE supervisor_id=%d AND id!=%d", $supervisorID, $supervisorID));

        $user_list_query = 'user_id='.$supervisorID;
        foreach ($result->result() as $user) {
            $user_list_query .= ' OR user_id='.$user->id;
            $subords[$user->id] = $user;
        }

        // Get submissions
        $submissionsResult = $this->db->query(sprintf("SELECT * FROM submissions WHERE %s", $user_list_query));
        $submissions = array();
        foreach ($submissionsResult->result() as $submission) {
            $entriesResult = $this->db->query(sprintf("SELECT * FROM submittedentries WHERE timestamp=%d", $submission->timestamp));
            $entries = array();
            foreach ($entriesResult->result() as $entry)
                $entries[] = $entry;
            $submissions[] = array(
                'user' => $subords[$submission->user_id],
                'entries' => $entries
            );
            $entriesResult->free_result();
        }

    Basically I'm getting a list of users that are subordinates of a given supervisor_id (every user entry has a supervisor_id field), then grabbing entries belonging to any of those users. I can't help but think there is a more elegant way of doing this, something like:

        SELECT FROM tablename WHERE user->supervisor_id = 2222

    Is there something like this with PHP/MySQL? I should probably learn relational databases properly sometime. :(
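
    The OR chain can indeed be replaced by a join against the users table, so the database resolves the subordinate list itself in one round trip (a sketch using the column names above):

        SELECT s.*
        FROM submissions AS s
        JOIN users AS u ON u.id = s.user_id
        WHERE u.supervisor_id = 2222
           OR u.id = 2222;   -- include the supervisor's own submissions

    The per-submission entries query can be folded in the same way, joining submittedentries on the timestamp column, which removes the query-per-row loop entirely.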

  • DBD::Oracle and utf8 issue

    - by goe
    Hi all, I have a problem where my Perl code, using the latest DBD::Oracle on Perl v5.8.8, throws an exception when I try to insert characters like 'ñ'.

    Exception:

        DBD::Oracle::db do failed: ORA-01756: quoted string not properly terminated (DBD ERROR: OCIStmtPrepare)

    My $ENV{NLS_LANG} is set to 'AMERICAN_AMERICA.AL32UTF8'.

    These are the DB parameters, based on "SELECT * FROM NLS_DATABASE_PARAMETERS":

        NLS_LANGUAGE              AMERICAN
        NLS_TERRITORY             AMERICA
        NLS_CURRENCY              $
        NLS_ISO_CURRENCY          AMERICA
        NLS_NUMERIC_CHARACTERS    .,
        NLS_CHARACTERSET          AL32UTF8
        NLS_CALENDAR              GREGORIAN
        NLS_DATE_FORMAT           DD-MON-RR
        NLS_DATE_LANGUAGE         AMERICAN
        NLS_SORT                  BINARY
        NLS_TIME_FORMAT           HH.MI.SSXFF AM
        NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM
        NLS_TIME_TZ_FORMAT        HH.MI.SSXFF AM TZR
        NLS_TIMESTAMP_TZ_FORMAT   DD-MON-RR HH.MI.SSXFF AM TZR
        NLS_DUAL_CURRENCY         $
        NLS_COMP                  BINARY
        NLS_LENGTH_SEMANTICS      BYTE

    These are the Perl-side parameters, based on "$db->ora_nls_parameters()":

        $VAR1 = {
            'NLS_LANGUAGE' => 'AMERICAN',
            'NLS_TIME_TZ_FORMAT' => 'HH.MI.SSXFF AM TZR',
            'NLS_SORT' => 'BINARY',
            'NLS_NUMERIC_CHARACTERS' => '.,',
            'NLS_TIME_FORMAT' => 'HH.MI.SSXFF AM',
            'NLS_ISO_CURRENCY' => 'AMERICA',
            'NLS_COMP' => 'BINARY',
            'NLS_CALENDAR' => 'GREGORIAN',
            'NLS_DATE_FORMAT' => 'DD-MON-RR',
            'NLS_DATE_LANGUAGE' => 'AMERICAN',
            'NLS_TIMESTAMP_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM',
            'NLS_TERRITORY' => 'AMERICA',
            'NLS_LENGTH_SEMANTICS' => 'BYTE',
            'NLS_NCHAR_CHARACTERSET' => 'AL16UTF16',
            'NLS_DUAL_CURRENCY' => '$',
            'NLS_TIMESTAMP_TZ_FORMAT' => 'DD-MON-RR HH.MI.SSXFF AM TZR',
            'NLS_NCHAR_CONV_EXCP' => 'FALSE',
            'NLS_CHARACTERSET' => 'AL32UTF8',
            'NLS_CURRENCY' => '$'
        };

    Here are some other strange facts: if I set NLS_LANG to 'AMERICAN_AMERICA.UTF8', the insert executes fine with the 'ñ' character; and if I leave NLS_LANG as 'AMERICAN_AMERICA.AL32UTF8' but use 'Ñ', the insert also runs fine.
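
    Whatever the character-set mismatch turns out to be, interpolating the string into the SQL text is what lets a mangled byte break the quoting in the first place. Bind values keep the data out of the statement entirely, so the prepare never sees the problem character (a sketch; table and column names are placeholders):

        my $sth = $dbh->prepare('INSERT INTO names (name) VALUES (?)');
        $sth->execute("ni\x{f1}o");   # 'niño'; the driver handles quoting and charset conversion

        # Or, without an explicit prepare:
        $dbh->do('INSERT INTO names (name) VALUES (?)', undef, $name);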

  • Using Subsonic 3.0 Advanced Templates

    - by umit
    Hi all, I've been trying to use SubSonic Advanced Templates in a project for a while, but most of the time I find myself writing a stored procedure instead, as I can't find a proper way of doing it in code. SubSonic created corresponding objects for my DB tables, and for foreign keys it created IQueryable fields inside each object. These fields are not loaded by default, and a new SQL query is executed when you access them.

    1- Is there a way to get all the data in one query (deep load)? Also, these fields cannot be assigned, so when I want to create an object in a maintenance page, I can't put all the data into the object before saving it to the DB:

        Post post = new Post();
        // get photos for this post
        IList<PostPhoto> postPhotos = GetPostPhotos();
        post.PostPhotos = postPhotos;

    2- Is it possible to have one Post object with all its fields set from user input? Think of the Post object above, and assume I've successfully assigned its fields; now I need to save it to the DB.

    3- Is using BatchQuery the only way to do it in one query? If I have 4 photos in the PostPhotos field - 2 of them previously saved and 2 of them new - can I use the Update method to handle both adding and updating these photos?

    Any ideas or links are appreciated. Cheers...

  • EF4 Code First - Many to many relationship issue

    - by Yngve B. Nilsen
    Hi! I'm having some trouble with my EF Code First model when saving to a many-to-many relationship. My models:

        public class Event
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual ICollection<Tag> Tags { get; set; }
        }

        public class Tag
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual ICollection<Event> Events { get; set; }
        }

    In my controller, I map one or more TagViewModels to type Tag and send them down to my service layer for persistence. At this point, inspecting the entities, each Tag has both Id and Name (the Id is a hidden field and the name a textbox in my view).

    The problem occurs when I add the Tags to the Event. Take the following scenario: the Event is already in my database, and let's say it already has the related tags C#, ASP.NET. I now send the following list of tags to the service layer:

        ID  Name
        1   C#
        2   ASP.NET
        3   EF4

    and add them by first fetching the Event from the DB (so that I have an actual Event from my DbContext) and then simply calling myEvent.Tags.Add. After SaveChanges(), my DB contains this set of tags:

        ID  Name
        1   C#
        2   ASP.NET
        3   EF4
        4   C#
        5   ASP.NET

    This happens even though the Tags have their IDs set when I save them (although I didn't fetch them from the DB).
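
    EF treats any entity the context is not tracking as new, regardless of its Id value, so detached tags get re-inserted. The usual fix is to attach the existing tags to the context before adding them to the collection - a sketch under that assumption:

        foreach (var tag in tags)
        {
            if (tag.Id > 0)
                db.Tags.Attach(tag);   // already in the DB: track as existing, don't re-insert

            myEvent.Tags.Add(tag);     // genuinely new tags (Id == 0) are still inserted
        }
        db.SaveChanges();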

  • Export the DataGrid data to text in ASP.NET / C#

    - by SRIRAM
    Problem: it says there is no assembly reference / namespace for Database.

        Database db = DatabaseFactory.CreateDatabase();
        DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles");
        DataSet ds = db.ExecuteDataSet(selectCommandWrapper);

        StringBuilder str = new StringBuilder();
        for (int i = 0; i <= ds.Tables[0].Rows.Count - 1; i++)
        {
            for (int j = 0; j <= ds.Tables[0].Columns.Count - 1; j++)
            {
                str.Append(ds.Tables[0].Rows[i][j].ToString());
            }
            str.Append("<BR>");
        }

        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=FileName.txt");
        Response.Charset = "";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.ContentType = "application/vnd.text";
        System.IO.StringWriter stringWrite = new System.IO.StringWriter();
        System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
        Response.Write(str.ToString());
        Response.End();
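
    For context: Database and DatabaseFactory come from the Microsoft Enterprise Library Data Access Application Block, and DBCommandWrapper in particular is from the 1.x versions of that library. So the project needs a reference to the EntLib Data assembly plus the matching using directive (a sketch, assuming an EntLib 1.x install since DBCommandWrapper was removed in later versions):

        // Add a project reference to Microsoft.Practices.EnterpriseLibrary.Data.dll
        using Microsoft.Practices.EnterpriseLibrary.Data;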

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium-sized application, and I'm planning its db design. One thing that I'm not sure about is this: I will have many tables which will need internationalization, such as membership_options, gender_options, language_options, etc. Each of these tables will share common i18n fields, like title, alternative_title, short_description, description.

    In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that will need them? Or do something like:

        Membership table            Gender table
        ----------------            ------------
        id | created_at             id | created_at
        1  - 22.03.2001             1  - 14.08.2002
        2  - 22.03.2001             2  - 14.08.2002

        General translation table
        -------------------------
        record_id | table_name | string_name | alternative_title | .... | id_language
        1         - membership - regular     - null              -      - 1 (english)
        1         - membership - normale     - null              -      - 2 (italian)
        1         - gender     - man         - null              -      - 1 (english)
        1         - gender     - uomo        - null              -      - 2 (italian)

    This would avoid me repeating something like:

        membership_translation table
        -----------------------------
        membership_id | name    | alternative_title | id_lang
        1             - regular - null              - 1
        1             - normale - null              - 2

        gender_translation table
        -----------------------------
        gender_id | name | alternative_title | id_lang
        1         - man  - null              - 1
        1         - uomo - null              - 2

    and so on. I would probably reduce the number of db tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.
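
    A sketch of the shared-table variant in MySQL, using the names from the example above (the composite primary key enforces one translation per record, table, and language, and doubles as the lookup index, so performance stays predictable):

        CREATE TABLE translations (
            record_id          INT          NOT NULL,
            table_name         VARCHAR(64)  NOT NULL,
            name               VARCHAR(255),
            alternative_title  VARCHAR(255),
            id_language        INT          NOT NULL,
            PRIMARY KEY (table_name, record_id, id_language)
        );

        -- Lookups stay index-friendly:
        SELECT name FROM translations
        WHERE table_name = 'membership' AND record_id = 1 AND id_language = 2;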

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

        CREATE TABLE Contact (
            ID   varchar(15),
            NAME varchar(30)
        );

        CREATE TABLE Address (
            ID         varchar(15),
            CONTACT_ID varchar(15),
            NAME       varchar(50)
        );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support, and I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the DB-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since everything is kept in the Identity Map in memory.

    Question: what is a good approach to address this, avoiding unnecessary db round trips?

    One idea - retrieve the last ID. I can make a call to the database to retrieve the last ID, like:

        SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact';

    But this is MySQL-specific, though probably something similar can be done for the other databases. If I do this, then I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs) - all in the current transaction context.
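
    A caution on that idea: the information_schema counter is server-wide, so another session can consume the value between your read and your insert. MySQL's LAST_INSERT_ID() is scoped to the current connection, which makes the insert-then-read pattern race-free; Oracle's equivalent is reading seq.NEXTVAL before the insert. A sketch, assuming the external schema's Contact.ID is an integer AUTO_INCREMENT as in the scenario described:

        INSERT INTO Contact (NAME) VALUES ('Alice');
        SELECT LAST_INSERT_ID();   -- per-connection, so concurrent sessions cannot interleave
        -- then fill Address.CONTACT_ID with that value inside the same transaction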

  • How to use the textbox value to fetch records and display them in the same page

    - by Aruna
    Hi, I have a form like:

        <script language="javascript" type="text/javascript">
            function formfn()
            {
                var str = document.getElementById('TitleSearch').value;
                alert(str); // displays the keyword, e.g. "database"
            }
        </script>

        <form name="f1" method="post">
            <p><label for="TitleSearch">Keywords:</label>
               <input title="Keyword" size="40" value="" id="TitleSearch"></p>
            <p><input type="submit" id="im-search" value="Search" name="im-search" onClick="formfn();"></p>
        </form>

    I have a page with this form at the top. On search, it has to take the value of the TitleSearch textbox and use it to retrieve the matching records with:

        <?php
            $db =& JFactory::getDBO();
            $query = 'SELECT * FROM #__chronoforms_Publications WHERE keyword LIKE "%valueretrieved%"';
            $db->setQuery($query);
            $rows = $db->loadObjectList();
            // echo $rows;
        ?>

    Once the search button is clicked, the textbox value of the keyword should be retrieved. I am trying to use this value in the select query to fetch the records and display them in the same page. How do I do this?
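
    Two things are needed here: the input must carry a name attribute (only named fields appear in $_POST), and the posted value should be escaped before being spliced into the LIKE clause. A sketch using the Joomla API already in use above:

        <input title="Keyword" size="40" value="" id="TitleSearch" name="TitleSearch">

        <?php
            $keyword = isset($_POST['TitleSearch']) ? $_POST['TitleSearch'] : '';
            if ($keyword !== '') {
                $db =& JFactory::getDBO();
                // getEscaped(..., true) also escapes the LIKE wildcards % and _
                $query = 'SELECT * FROM #__chronoforms_Publications'
                       . ' WHERE keyword LIKE ' . $db->Quote('%' . $db->getEscaped($keyword, true) . '%');
                $db->setQuery($query);
                $rows = $db->loadObjectList();
                foreach ($rows as $row) {
                    echo $row->keyword;   // render each matching record as needed
                }
            }
        ?>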

  • SQL won't work? It doesn't come up with errors either

    - by Stefan
    Hey there, I have a PHP function which checks to see if variables are set and then adds them onto my SQL query. However, I don't seem to be getting any results back!?

        $where_array = array();

        if (array_key_exists("location", $_GET)) {
            $location = addslashes($_GET['location']);
            $where_array[] = "`mainID` = '".$location."'";
        }
        if (array_key_exists("gender", $_GET)) {
            $gender = addslashes($_GET["gender"]);
            $where_array[] = "`gender` = '".$gender."'";
        }
        if (array_key_exists("hair", $_GET)) {
            $hair = addslashes($_GET["hair"]);
            $where_array[] = "`hair` = '".$hair."'";
        }
        if (array_key_exists("area", $_GET)) {
            $area = addslashes($_GET["area"]);
            $where_array[] = "`locationID` = '".$area."'";
        }

        $where_expr = '';
        if ($where_array) {
            $where_expr = "WHERE " . implode(" AND ", $where_array);
        }

        $sql = "SELECT `postID` FROM `posts` " . $where_expr;
        $dbi = new db();
        $result = $dbi->query($sql);
        $r = mysql_fetch_row($result);

    I'm then trying to display the data in a list, like so:

        $dbi = new db();
        $offset = ($currentpage - 1) * $rowsperpage;

        // get the info from the db
        $sql .= " ORDER BY `time` DESC LIMIT $offset, $rowsperpage";
        $result = $dbi->query($sql);

        // while there are rows to be fetched...
        while ($row = mysql_fetch_object($result)) {
            // echo data
            echo $row['text'];
        }
        // end while

    Anyone got any ideas why I am not retrieving any data? -Stefan
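
    A few things in the second block will stop data from appearing even when the query matches rows: the SELECT only asks for postID yet the loop echoes text, and mysql_fetch_object() returns an object, so array access like $row['text'] won't work. A sketch of the corrected loop plus the usual visibility checks (assuming the db wrapper returns false on failure):

        $sql = "SELECT `postID`, `text`, `time` FROM `posts` " . $where_expr
             . " ORDER BY `time` DESC LIMIT $offset, $rowsperpage";
        echo $sql;                                   // eyeball the generated WHERE clause
        $result = $dbi->query($sql) or die(mysql_error());

        while ($row = mysql_fetch_object($result)) {
            echo $row->text;                         // object syntax, not $row['text']
        }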

  • Makefile; mirroring a growing tree through a process

    - by Martineau
    I would like to periodically mirror a growing tree, say from $in to $out, doing some processing in between (saving only the file header). As in:

        #!/bin/bash
        in=./segd
        out=./db

        for f in `find $in -name "*.segd"`; do
            # Deduce output (dir + name)
            d=`dirname $f | perl -pe 's!'$in'!'$out'!'`
            n=`basename $f | perl -pe 's!$!_hdr!'`
            if [ ! -e $d/$n ]; then
                [ ! -d $d ] && mkdir -p $d
                printf "From %s now build %s\n" $f "$d/$n"
                # Do something, whatever. For example:
                dd if=$f bs=32 count=1 conv=swab 2>/dev/null | od -x > $d/$n
            fi
        done

    That is about fair. However, to be more robust and get better synchronization (say, if a source file changed), I would like to use a Makefile, as in:

        HDR := $(patsubst ./segd/%.segd,./db/%.segd_hdr,$(wildcard ./segd/*.segd))

        all: ${HDR}

        db/%.segd_hdr: ./segd/%.segd
                echo "Doing"
                dd if=$< bs=32 count=1 conv=swab 2>/dev/null | od -x > $@

    My problem: I cannot get this Makefile to "dive" more deeply into the source ./segd tree. How can we do it, and is there a way? Many thanks for your kind recommendations.

    PS: The idea is to later rsync the (smaller) destination tree over a sat connection.
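
    The cause is that $(wildcard ./segd/*.segd) only matches the top level; recursion needs a real directory walk, which GNU make can delegate to find via $(shell). A sketch (GNU make; the % in a pattern rule can match across slashes, and mkdir -p creates the nested output dirs; recipe lines must start with a tab):

        SRC := $(shell find ./segd -name '*.segd')
        HDR := $(patsubst ./segd/%.segd,./db/%.segd_hdr,$(SRC))

        all: $(HDR)

        db/%.segd_hdr: segd/%.segd
                mkdir -p $(dir $@)
                dd if=$< bs=32 count=1 conv=swab 2>/dev/null | od -x > $@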

  • A C# Refactoring Question...

    - by james lewis
    I came across the following code today and I didn't like it. It's fairly obvious what it's doing, but I'll add a little explanation here anyway: basically it reads all the settings for an app from the DB, then iterates through all of them looking for the DB version and the app version, and sets some variables to the values in the DB (to be used later).

    I looked at it and thought it was a bit ugly - I don't like switch statements, and I hate things that carry on iterating through a list once they're finished. So I decided to refactor it. My question to all of you is: how would you refactor it? Or do you think it even needs refactoring at all? Here's the code:

        using (var sqlConnection = new SqlConnection(Lfepa.Itrs.Framework.Configuration.ConnectionString))
        {
            sqlConnection.Open();
            var dataTable = new DataTable("Settings");
            var selectCommand = new SqlCommand(Lfepa.Itrs.Data.Database.Commands.dbo.SettingsSelAll, sqlConnection);
            var reader = selectCommand.ExecuteReader();

            while (reader.Read())
            {
                switch (reader[SettingKeyColumnName].ToString().ToUpper())
                {
                    case DatabaseVersionKey:
                        DatabaseVersion = new Version(reader[SettingValueColumneName].ToString());
                        break;
                    case ApplicationVersionKey:
                        ApplicationVersion = new Version(reader[SettingValueColumneName].ToString());
                        break;
                    default:
                        break;
                }
            }

            if (DatabaseVersion == null)
                throw new ApplicationException("Could not load Database Version Setting from the database.");

            if (ApplicationVersion == null)
                throw new ApplicationException("Could not load Application Version Setting from the database.");
        }
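
    One possible refactoring, sketched with the original identifiers kept: replace the switch with a dictionary of setters, and stop reading as soon as every wanted key has been seen - which addresses both complaints at once:

        var setters = new Dictionary<string, Action<Version>>
        {
            { DatabaseVersionKey,    v => DatabaseVersion    = v },
            { ApplicationVersionKey, v => ApplicationVersion = v },
        };

        var pending = new HashSet<string>(setters.Keys);
        while (pending.Count > 0 && reader.Read())   // quits early once both keys are found
        {
            var key = reader[SettingKeyColumnName].ToString().ToUpper();
            if (pending.Remove(key))
                setters[key](new Version(reader[SettingValueColumneName].ToString()));
        }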

  • Getting a UIImage from MySQL using PHP and JSON

    - by Daniel
    I'm developing a little news reader that retrieves its info from a website by doing a POST request to a URL. The response is a JSON object with the unread news. For example, if the last news item in the app has a timestamp of "2013-03-01", then when the user refreshes the table it POSTs to "domain.com/api/api.php?newer-than=2013-03-01". The api.php script goes to the MySQL database, fetches all the news posted after 2013-03-01 and prints them json_encoded:

        // do something to get the data in an array
        echo $array_of_fetched_data;

    For example, the response would be:

        [{"title": "new app is coming to the market", "text": "lorem ipsum dolor sit amet...", "image": XXX}]

    The app then gets the response and parses it, obtaining an NSDictionary, and adds it to a Core Data db:

        NSDictionary *obtainedNews = [NSJSONSerialization JSONObjectWithData:responseData options:kNilOptions error:&error];

    My question is: how can I add an image to the MySQL database, store it, pass it through JSON over a POST HTTP request, and then interpret it as a UIImage? It's clear that to store a UIImage in Core Data it must be transformed into/from NSData. How can I pass the NSData back and forth to a MySQL db using PHP and JSON? How should I upload the image to the db (serialized, as a BLOB, etc.)?
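
    JSON cannot carry raw binary, so the usual route is to store the image bytes in a BLOB column and base64-encode them at the JSON boundary; on the app side, the string decodes back to NSData, from which a UIImage can be created. A sketch of the PHP side (variable names are placeholders):

        // $blob holds the BLOB column fetched from MySQL
        echo json_encode(array(
            'title' => $title,
            'text'  => $text,
            'image' => base64_encode($blob),   // binary-safe for JSON transport
        ));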

  • How to extract the SqlCommand from a compiled LINQ query

    - by Harry
    In normal (not compiled) LINQ to SQL queries, you can extract the SqlCommand from the IQueryable via the following code:

        SqlCommand cmd = (SqlCommand)table.Context.GetCommand(query);

    Is it possible to do the same for a compiled query? The following code provides me with a delegate to a compiled query:

        private static readonly Func<Data.DAL.Context, string, IQueryable<Word>> Query_Get =
            CompiledQuery.Compile<Data.DAL.Context, string, IQueryable<Word>>(
                (context, name) => from r in context.GetTable<Word>()
                                   where r.Name == name
                                   select r);

    When I use this to generate the IQueryable and attempt to extract the SqlCommand, it doesn't seem to work. When debugging the code, I can see that the SqlCommand returned has the 'very' useful CommandText of 'SELECT NULL AS [EMPTY]':

        using (var db = new Data.DAL.Context())
        {
            IQueryable<Word> query = Query_Get(db, "word");
            SqlCommand cmd = (SqlCommand)db.GetCommand(query);
            Console.WriteLine(cmd != null ? cmd.CommandText : "Command Not Found");
        }

    I can't find anything on Google about this particular scenario; no doubt it is not a common thing to attempt... So... any thoughts?
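
    As a fallback for seeing the SQL a compiled query produces, the DataContext.Log property captures the generated command text when the query actually executes (a sketch; this surfaces the SQL as text rather than handing back a SqlCommand):

        using (var db = new Data.DAL.Context())
        {
            db.Log = Console.Out;                        // every executed command is written here
            var words = Query_Get(db, "word").ToList();  // force execution so the SQL is emitted
        }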

  • Linq and returning types

    - by cdotlister
    My GUI calls a service project that does some LINQ work and returns data to the GUI. However, I am battling with the return type of the method. After some reading, I have this as my method:

        public static IEnumerable GetDetailedAccounts()
        {
            IEnumerable accounts = (from a in Db.accounts
                                    join i in Db.financial_institution on a.financial_institution.financial_institution_id equals i.financial_institution_id
                                    join acct in Db.z_account_type on a.z_account_type.account_type_id equals acct.account_type_id
                                    orderby i.name
                                    select new { account_id = a.account_id, name = i.name, description = acct.description });
            return accounts;
        }

    However, my caller is battling a bit. I think I am screwing up the return type, or not handling it well at the caller, but it's not working as I'd hoped. This is how I am attempting to call the method from my GUI:

        IEnumerable accounts = Data.AccountService.GetDetailedAccounts();

        Console.ForegroundColor = ConsoleColor.Green;
        Console.WriteLine("Accounts:");
        Console.ForegroundColor = ConsoleColor.White;

        foreach (var acc in accounts)
        {
            Console.WriteLine(string.Format("{0:00} {1}", acc.account_id, acc.name + " " + acc.description));
        }

        int accountid = WaitForKey();

    However, my foreach and the acc aren't working: acc doesn't know about the name, description and id that I set up in the method. Am I at least close to being right?
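
    Anonymous types can't usefully cross a method boundary - over a plain IEnumerable the caller only sees object - so the standard fix is a small named DTO (a sketch; the AccountDetails type name is made up):

        public class AccountDetails
        {
            public int AccountId { get; set; }
            public string Name { get; set; }
            public string Description { get; set; }
        }

        public static IEnumerable<AccountDetails> GetDetailedAccounts()
        {
            return from a in Db.accounts
                   join i in Db.financial_institution on a.financial_institution.financial_institution_id equals i.financial_institution_id
                   join acct in Db.z_account_type on a.z_account_type.account_type_id equals acct.account_type_id
                   orderby i.name
                   select new AccountDetails
                   {
                       AccountId = a.account_id,
                       Name = i.name,
                       Description = acct.description
                   };
        }

    With the generic return type, the caller's foreach gets full member access (and IntelliSense) on acc.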

  • Import CSV/Excel file into SQL database (ASP.NET)

    - by kiev
    Hi everyone! I am starting a project with ASP.NET / Visual Studio 2008 / SQL 2000 (2005 in future) using C#. The tricky part for me is that the existing DB schema changes often, and the import file's columns will all have to be matched up with the existing db schema, since they may not be a one-to-one match on column names. (There is a lookup table that provides the table's schema with the column names I will use.)

    I am exploring different ways to approach this and need some expert advice. Are there any existing controls or frameworks that I can leverage to do any of this? So far I have explored the .NET FileUpload control, as well as some third-party upload controls such as SlickUpload, to accomplish the upload; the files uploaded should be < 500 MB.

    The next part is reading my CSV / Excel file and parsing it for display to the user, so they can match it with our db schema. I saw CSVReader and others, but Excel is more difficult, since I will need to support different versions.

    Essentially, the user performing this import will insert and/or update several tables from this import file. There are other, more advanced requirements, like record matching and a preview of the import records, but I wish to understand how to do this first.

    Update: I ended up using CsvReader from LumenWorks.Framework for uploading the CSV files.
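
    For reference, the LumenWorks reader mentioned in the update exposes the header row directly, which is exactly what a column-mapping UI needs. A minimal sketch (the file path is a placeholder):

        using System.IO;
        using LumenWorks.Framework.IO.Csv;

        using (var csv = new CsvReader(new StreamReader("upload.csv"), true)) // true: first row is headers
        {
            string[] headers = csv.GetFieldHeaders();   // present these for mapping to DB columns
            while (csv.ReadNextRecord())
            {
                for (int i = 0; i < csv.FieldCount; i++)
                {
                    string value = csv[i];              // indexed access per field
                }
            }
        }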
