Search Results

Search found 5233 results on 210 pages for 'records'.

Page 185/210 | < Previous Page | 181 182 183 184 185 186 187 188 189 190 191 192  | Next Page >

  • Why doesn't this PHP execute?

    - by cam
    I copied the code from this site exactly: http://davidwalsh.name/web-service-php-mysql-xml-json as follows:

        /* require the user as the parameter */
        if(isset($_GET['user']) && intval($_GET['user'])) {

            /* soak in the passed variable or set our own */
            $number_of_posts = isset($_GET['num']) ? intval($_GET['num']) : 10; // 10 is the default
            $format = strtolower($_GET['format']) == 'json' ? 'json' : 'xml';   // xml is the default
            $user_id = intval($_GET['user']);                                   // no default

            /* connect to the db */
            $link = mysql_connect('localhost','username','password') or die('Cannot connect to the DB');
            mysql_select_db('db_name',$link) or die('Cannot select the DB');

            /* grab the posts from the db */
            $query = "SELECT post_title, guid FROM wp_posts WHERE post_author = $user_id AND post_status = 'publish' ORDER BY ID DESC LIMIT $number_of_posts";
            $result = mysql_query($query,$link) or die('Errant query: '.$query);

            /* create one master array of the records */
            $posts = array();
            if(mysql_num_rows($result)) {
                while($post = mysql_fetch_assoc($result)) {
                    $posts[] = array('post'=>$post);
                }
            }

            /* output in necessary format */
            if($format == 'json') {
                header('Content-type: application/json');
                echo json_encode(array('posts'=>$posts));
            }
            else {
                header('Content-type: text/xml');
                echo '<posts>';
                foreach($posts as $index => $post) {
                    if(is_array($post)) {
                        foreach($post as $key => $value) {
                            echo '<',$key,'>';
                            if(is_array($value)) {
                                foreach($value as $tag => $val) {
                                    echo '<',$tag,'>',htmlentities($val),'</',$tag,'>';
                                }
                            }
                            echo '</',$key,'>';
                        }
                    }
                }
                echo '</posts>';
            }

            /* disconnect from the db */
            @mysql_close($link);
        }

    And the PHP doesn't execute; it just displays as plain text. What's the dealio? The host supports PHP, and I use it to run a WordPress blog and other things.
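
    One quick check, since code that renders verbatim usually never reaches the PHP interpreter at all: drop a minimal probe file on the same host. A sketch (hypothetical file name; nothing here comes from the article):

        <?php
        // probe.php - hypothetical test file. If even this renders as plain
        // text, the server is not handing .php files to PHP (wrong extension,
        // wrong directory, or a handler misconfiguration), regardless of the
        // code above.
        echo 'PHP is executing: ' . PHP_VERSION;

    If the probe runs but the copied code still prints verbatim, note that the snippet as pasted has no opening <?php tag; without it the interpreter echoes the whole file as literal text even when PHP itself is working.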

  • Data Warehouse: Modelling a future schedule

    - by Pat
    I'm creating a DW that will contain data on financial securities such as bonds and loans. These securities are associated with payment schedules. For example, a bond could pay quarterly, while a mortgage would usually pay monthly (sometimes biweekly). The payment schedule is created when the security is traded and, in the majority of cases, will remain unchanged. However, the design needs to accommodate those cases where it does change. I'm currently attempting to model this data and I'm having difficulty coming up with a workable design. One of the most commonly queried fields is "next payment date": users often want to know when a security will pay next. Therefore, I want to make it as easy as possible for them to get the next payment date and amount for each security. Users also often run historical queries, in which case they'd want the next payment date and amount as of a specific point in time. For example, they may want to look back at 1/31/09 and query the next payment dates (which would usually be in February 2009 for mortgages). It's also common that they want to query a security's entire payment schedule, which might consist of 360 records (30-year mortgage x 12 payments/year). Since the next payment date and amount change each month or even biweekly, these fields wouldn't seem to fit into a slowly changing dimension very well. It would probably make more sense to use a fact table, but I'm unsure of how to model it. Any ideas would be greatly appreciated.

  • How to convert a list object to BigDecimal in a prepared statement?

    - by user1103504
    I am using a prepared statement for bulk insertion of records. I am iterating over a list whose values have differing data types. One of the types is BigDecimal, and when I try to set it on the prepared statement, it throws a NullPointerException. My code:

        int count = 1;
        for (int j = 0; j < list.size(); j++) {
            if (list.get(j) instanceof Timestamp) {
                ps.setTimestamp(count, (Timestamp) list.get(j));
            } else if (list.get(j) instanceof java.lang.Character) {
                ps.setString(count, String.valueOf(list.get(j)));
            } else if (list.get(j) instanceof java.math.BigDecimal) {
                ps.setBigDecimal(count, (java.math.BigDecimal) list.get(j));
            } else {
                ps.setObject(count, list.get(j));
            }
            count++;
        }

    I tried two ways to convert: casting the object (above), and creating a new BigDecimal from the value:

        ps.setBigDecimal(count, new BigDecimal(list.get(j).toString()));

    Neither solves my problem; both still throw a NullPointerException. Help is appreciated. Thanks

  • Specific Shopping Cart Recommendations

    - by Dean J
    I'm trying to suggest a solution for a friend who owns an existing web shop. The current solution isn't cutting it. The new solution needs a few things that look like they're enterprise-only if I go with Magento, and $12k a year for a store with maybe $20k in stock just doesn't work. The site should have items, which have one or more categories. Each category may have a parent category. Items have an MSRP and a discount rate by supplier and brand, and sometimes an additional discount by product. When a user buys something, it should automatically set up a shipping label with UPS or USPS, depending on the user's choice, and build two invoices: one to go in the box, one to go into records. This is crucial; profit per item is low, so it needs to minimize labor here. It needs to support time-limited sales and discount/coupon codes, and ideally private sales and/or members-only rates as well. It needs a payment gateway; PayPal/GCheckout-only isn't going to fly. It must be able to accept Visa/MC. Suggestions? I'm debating just building this myself in Java or PHP, but wanted to point my friend to a reasonable-cost solution that already exists if I can. This all seems pretty straightforward to code, save working with the UPS/USPS/Visa/MC APIs, and doing CSS for it.

  • NHibernate will not insert a record

    - by Brian Beckham
    I have an application that is now 4+ years old and is exhibiting some odd behavior in our latest deployment. The application uses NHibernate for all inserts, updates, selects, etc. We are currently using .NET 2.0 and NHibernate 1.2 (I know, we need to upgrade). This deployment is on Windows 2008 Server x64, IIS 7.5. What I have seen so far is that the application runs but is unable to insert or update records in the DB; reads seem fine so far, but writes are a problem. SOME writes actually work (inserts into some small tables), but most never even make it to the DB. Using SQL Profiler, the inserts/updates never make it to the server; and with log4net turned up to DEBUG and show_sql true, the select statements appear, but the insert/update statements never make it into the log at all and never show up at the server. What's even more odd is that the application seems to be oblivious to this: the command-and-close runs without exception (open-session-in-view with an HttpModule), the domain objects come back with UUIDs generated, etc., but they never get persisted. Certainly an upgrade is due, but I would hate to try it during a deployment and without time to accurately test the app. Any ideas?

  • What is the best way to optimize my JSON on an ASP.NET MVC site?

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site. We have a pretty slow network (internal application) and the grid seems to take a long time to load (the issue is network as well as parsing and rendering). I am trying to determine how to minimize what I send to the client to make it as fast as possible. Here is a simplified view of my controller action that loads data into the grid:

        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GridData1(GridData args)
        {
            var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new
            {
                i.Id,
                Name = "<div class='showDescription' id='" + i.id + "'>" + i.Name + "</div>",
                MyValue = GetImageUrl(_map, i.value, "star"),
                ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>", Url.Action("Link", "Order", new { id = i.id }), i.Id),
                i.Target,
                i.Owner,
                EndDate = i.EndDate,
                Updated = "<div class='showView' aitId='" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>",
            });
            return Json(paginatedData);
        }

    So I am building up JSON data (about 200 records of the above) and sending it back to the GUI to put in the jqGrid. The one thing I can think of is repeated data: in some of the JSON fields I am appending HTML on top of the raw data, and it is the same HTML on every record. It seems like it would be more efficient to just send the data and "append" the HTML around it on the client side. Is this possible? Then I would only be sending the actual data over the wire and the client side would add the rest of the HTML tags (the divs, etc.). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point these solutions will increase the client-side load, but it may be worth it to cut down on network traffic.

  • How to do an additional search on an archive in Rails if a record is not found, by extending the model?

    - by Nick Gorbikoff
    Hello, I was wondering if somebody knows an elegant solution to the following. Suppose I have a table that holds orders, with a bunch of data. I'm at 1M records, and searches are beginning to take time. I want to speed them up by archiving data that is more than 3 years old: saving it into a table called orders-archive and then purging those rows from the orders table. If we need to research something, or a customer wants to pull older information, they still can, but 99% of the lookups are done on orders no older than a year and a half, so there is no reason to keep looking through the older data all the time. The move-and-purge operations can then be cron'd to run on a weekly basis. I have already done some tests, and I know that I will cut my search times to about a quarter of what they are now. So far so good, right? However, I was thinking about how to implement the archival lookups, and the only reasonable thing I can think of is some sort of if-else: if not found in orders, do a search in orders-archive. However, I have about 20 tables that I want to archive, and who knows how many searches/finds are done throughout the code that I don't want to modify. So I was wondering if there is an elegant Rails-way solution to this problem, by extending a model somehow? Has anyone dealt with a similar case before? Thank you.

  • Utilizing a Queue

    - by Nathan
    I'm trying to store records of transactions, all together and by category, for the last 1, 7, 30, or 360 days. I've tried a couple of things, but they've brutally failed. I had an idea of using a queue with 360 values, one for each day, but I don't know enough about queues to figure out how that would work. Input will be an instance of this class:

        class Transaction
        {
            public string TotalEarned { get; set; }
            public string TotalHST { get; set; }
            public string TotalCost { get; set; }
            public string Category { get; set; }
        }

    New transactions can occur at any time during the day, and there could be as many as 15 transactions in a day. My program is using a plain text file as external storage, but how I load it depends on how I decide to store this data. What would be the best way to do this?

  • Why do I get null objects in a many-to-many bag?

    - by Jim Geurts
    I have a bag defined for a many-to-many list:

        <class name="Author" table="Authors">
          <id name="Id" column="AuthorId">
            <generator class="identity" />
          </id>
          <property name="Name" />
          <bag name="Books" table="Author_Book_Map" where="IsDeleted=0" fetch="join">
            <key column="AuthorId" />
            <many-to-many class="Book" column="BookId" where="IsDeleted=0" />
          </bag>
        </class>

    If I return all author objects using something like the following, I get what initially appear to be duplicate Author records:

        Session.Query<Author>().List<Author>()

    The extra author objects are created when an author is mapped to Book objects with both IsDeleted = 1 and IsDeleted = 0. Rather than creating one Author object with an enumerable containing only the books with IsDeleted = 0, it creates two author objects. The first author object has a Books enumerable containing the books with IsDeleted = 0; the second author object contains an enumerable of null book objects. Similarly, if an author has only one book mapping, and that mapping points to a book with IsDeleted = 1, then an author object is returned with a Books collection holding one null object. I'm thinking part of the problem stems from the map table rows satisfying the where condition on the bag element while failing the where condition on the many-to-many element. This is happening with NHibernate version 3.0.0.4980. Is this a configuration issue or something else?

  • MVC design for archived data view

    - by Hemant Tank
    Implementation of a standard archive process in ASP.NET MVC, with a SQL Server 2005 backend. We have an existing web app built in MVC. We have an entity "Claim", and it has some child entities like ClaimDetails, Files, etc. A pretty standard setup in the DB: each entity has its own table, and they are linked via foreign keys. Now we need an "Archive" feature in the web app which will allow an admin to archive a Claim and its child entities. An archived Claim should become read-only when visited again. Here are some points on which I need your valued opinion:
    - To keep it simple and scalable (for a few million records), for now we plan to simply add a bit field "Archived" to the Claim table in the DB and change the behavior accordingly in the web app.
    - We have a "Manage Claim" page which renders a bunch of different views for a Claim and its child entities. For a read-only view we can either reuse the same views or have a separate set of views. What do you suggest? At the controller level, we can identify an archived claim and select which view to render.
    - At the model level, it would be great to reuse the model used for Manage Claim, but it might not give us the text of some lookup fields. For example, Claim.BrandId is rendered as a dropdown in Manage Claim (requiring only BrandId), but for the read-only view we need BrandText.
    Any existing reference or architecture-level example would be great. Here's my previous SO post, but it's more about DB-level changes: Design a process to archive data (SQL Server 2005). Thank you.

  • Using an Enum in Hibernate causes a select followed by an update statement

    - by Leonardo
    Hi all, I have a mapped entity which has an enum property. Looking at the log file, whenever I run a select statement on such an entity, the select is immediately followed by an update. For example, if my result set contains 100 records, then I see:

        [INFO org... select...]
        [INFO org... update... where id=?]
        [INFO org... update... where id=?]
        .... repeated 100 times

    If I mark the property as update=false, the problem disappears. The enum is assigned through an enum converter class, which I copied from a well-known book, so I am not sure whether I simply copied and pasted the code correctly. Here is how it is declared in the hbm file:

        <typedef class="mypackage.HbnEnumConverter" name="the_type">
          <param name="enumClassname">mypackage.TheType</param>
        </typedef>

    Can you point out a direction to investigate this? Besides, what are the consequences of having update=false on a Hibernate field? Thanks

  • Doctrine: Unable to execute either CROSS JOIN or SELECT FROM Table1, Table2?

    - by ropstah
    Using Doctrine I'm trying to execute either (1) a CROSS JOIN statement or (2) a SELECT FROM Table1, Table2 statement. Both seem to fail. The CROSS JOIN does execute, but the results are just wrong compared to executing the same SQL in Navicat. The multiple-table SELECT doesn't even execute, because Doctrine automatically tries to LEFT JOIN the second table. The cross join statement (this runs, but it doesn't include the joined records where the refClass User_Setting doesn't have a value):

        $q = new Doctrine_RawSql();
        $q->select('{s.*}, {us.*}')
          ->from('User u CROSS JOIN Setting s LEFT JOIN User_Setting us ON us.usr_auto_key = u.usr_auto_key AND us.set_auto_key = s.set_auto_key')
          ->addComponent('u', 'User u')
          ->addComponent('s', 'Setting s')
          ->addComponent('us', 'u.User_Setting us')
          ->where('s.sct_auto_key = ? AND u.usr_auto_key = ?', array(1, $this->usr_auto_key));

    And the select from multiple tables (this doesn't even run: it does not spot the many-to-many relationship between User and Setting in the first ->from() part and throws an exception: '"User_Setting" with an alias of "us" in your query does not reference the parent component it is related to.'):

        $q = new Doctrine_RawSql();
        $q->select('{s.*}, {us.*}')
          ->from('User u, Setting s LEFT JOIN User_Setting us ON us.usr_auto_key = u.usr_auto_key AND us.set_auto_key = s.set_auto_key')
          ->addComponent('u', 'User u')
          ->addComponent('s', 'Setting s')
          ->addComponent('us', 'u.User_Setting us')
          ->where('s.sct_auto_key = ? AND u.usr_auto_key = ?', array(1, $this->usr_auto_key));
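
    One way to split the problem, offered only as a sketch: run the intended SQL over plain PDO, outside Doctrine, and compare the rows with Navicat; that separates Doctrine's query rewriting from the SQL itself. The DSN and credentials below are placeholders, and $usrAutoKey stands in for $this->usr_auto_key:

        <?php
        // Hypothetical standalone check: same CROSS JOIN, no Doctrine involved.
        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
        $sql = 'SELECT s.*, us.*
                  FROM User u
                 CROSS JOIN Setting s
                  LEFT JOIN User_Setting us
                        ON us.usr_auto_key = u.usr_auto_key
                       AND us.set_auto_key = s.set_auto_key
                 WHERE s.sct_auto_key = ? AND u.usr_auto_key = ?';
        $stmt = $pdo->prepare($sql);
        $stmt->execute(array(1, $usrAutoKey));
        var_dump($stmt->fetchAll(PDO::FETCH_ASSOC));

    If PDO returns the same wrong rows, the issue is in the SQL; if PDO matches Navicat, the hydration/addComponent mapping is the place to look.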

  • MySQL to Excel generation using PHP

    - by pmms
        <?php
        // DB Connection here
        mysql_connect("localhost", "root", "");
        mysql_select_db("hitnrunf_db");

        $select = "SELECT * FROM jos_users";
        $export = mysql_query($select) or die("Sql error : " . mysql_error());
        $fields = mysql_num_fields($export);

        $header = '';
        for ($i = 0; $i < $fields; $i++) {
            $header .= mysql_field_name($export, $i) . "\t";
        }

        $data = '';
        while ($row = mysql_fetch_row($export)) {
            $line = '';
            foreach ($row as $value) {
                if ((!isset($value)) || ($value == "")) {
                    $value = "\t";
                } else {
                    $value = str_replace('"', '""', $value);
                    $value = '"' . $value . '"' . "\t";
                }
                $line .= $value;
            }
            $data .= trim($line) . "\n";
        }

        $data = str_replace("\r", "", $data);
        if ($data == "") {
            $data = "\n(0) Records Found!\n";
        }

        header("Content-type: application/octet-stream");
        header("Content-Disposition: attachment; filename=your_desired_name.xls");
        header("Pragma: no-cache");
        header("Expires: 0");
        print "$header\n$data";
        ?>

    The above code is used for generating an Excel sheet from MySQL, but we are getting the following error when opening the file: "The file you are trying to open, 'users.xls', is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now?"
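
    For what it's worth, that warning is Excel's extension check firing: the script emits tab-separated text but labels it .xls, so the content does not match the extension. A hedged sketch of one common workaround, serving real CSV under a .csv name (connection details and table name taken from the post; fputcsv does the quoting):

        <?php
        // Sketch: export the same data as genuine CSV so the extension
        // matches the content and Excel opens it without complaint.
        mysql_connect("localhost", "root", "");
        mysql_select_db("hitnrunf_db");
        $result = mysql_query("SELECT * FROM jos_users") or die(mysql_error());

        header("Content-Type: text/csv");
        header("Content-Disposition: attachment; filename=users.csv");

        $out = fopen('php://output', 'w');
        $fields = array();
        for ($i = 0; $i < mysql_num_fields($result); $i++) {
            $fields[] = mysql_field_name($result, $i);  // header row
        }
        fputcsv($out, $fields);
        while ($row = mysql_fetch_row($result)) {
            fputcsv($out, $row);  // quotes and escapes each field as needed
        }
        fclose($out);

    A true native .xls would instead need a spreadsheet library such as PHPExcel; plain PHP cannot produce one with header() calls alone.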

  • Problem with incomplete input when using Attoparsec

    - by Dan Dyer
    I am converting some functioning Haskell code that uses Parsec to instead use Attoparsec, in the hope of getting better performance. I have made the changes and everything compiles, but my parser does not work correctly. I am parsing a file that consists of various record types, one per line. Each of my individual functions for parsing a record or comment works correctly, but when I try to write a function to compile a sequence of records, the parser always returns a partial result because it is expecting more input. These are the two main variations that I've tried. Both have the same problem.

        items :: Parser [Item]
        items = sepBy (comment <|> recordType1 <|> recordType2) endOfLine

    For this second one I changed the record/comment parsers to consume the end-of-line characters.

        items :: Parser [Item]
        items = manyTill (comment <|> recordType1 <|> recordType2) endOfInput

    Is there anything wrong with my approach? Is there some other way to achieve what I am attempting?

  • SQL with HAVING and temp table not working in Rails

    - by chrisrbailey
    I can't get the following SQL query to work quite right in Rails. It runs, but it fails to apply the "HAVING row_number = 1" part, so I'm getting all the records instead of just the first record from each group. A quick description of the query: it finds hotel deals matching various criteria, prioritizing paid deals, and picks the one with the highest dealrank. So, if there are paid deals, it takes the highest of those (by dealrank) first; if there are no paid deals, it takes the highest-dealrank unpaid deal for each hotel. Using MAX(dealrank) or something similar does not work as a way to pick off the first row of each hotel group, which is why I have the enclosing temp table and the computed row_number column. Here's the query:

        SELECT *,
               @num := if(@hid = hotel_id, @num + 1, 1) as row_number,
               @hid := hotel_id as dummy
        FROM (
            SELECT hotel_deals.*,
                   affiliates.cpc,
                   (CASE when affiliates.cpc > 0 then 1 else 0 end) AS paid
            FROM hotel_deals
            INNER JOIN hotels ON hotels.id = hotel_deals.hotel_id
            LEFT OUTER JOIN affiliates ON affiliates.id = hotel_deals.affiliate_id
            WHERE ((hotel_deals.percent_savings = 0) AND (hotel_deals.booking_deadline = ?))
            GROUP BY hotel_deals.hotel_id, paid DESC, hotel_deals.dealrank ASC
        ) temptable
        HAVING row_number = 1

    I'm currently using Rails' find_by_sql to run this, although I've also tried putting it into a regular find using the :select, :from, and :having parts (but :having won't get used unless you have a :group as well). If there is a different way to write this query, that'd be good to know too. I am using Rails 2.3.5 and MySQL 5.0.x.

  • iPhone Development - CoreData runtime error

    - by Mustafa
    I'm facing a strange Core Data issue. Here's the log:

        2010-04-07 15:59:36.913 MyProject[263:207] <MyEntity: 0x180370> (entity: MyEntity; id: 0x17e890 <x-coredata://0F55C533-41BD-4F09-9CCA-0CB304CAB065/MyEntity/p380> ; data: <fault>)
        2010-04-07 15:59:36.918 MyProject[263:207] *** Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'The NSManagedObject with ID:0x17e890 <x-coredata://0F55C533-41BD-4F09-9CCA-0CB304CAB065/MyEntity/p380> has been invalidated.'

    I have a hierarchy of UITableViewControllers that use NSFetchedResultsController to populate their tables, and when a particular row is selected, the detail view is shown:

        UITableView (MyMainEntity)
            UITableView (MyEntity)
                UITableView (MyEntity) detail view

    Both the MyMainEntity table view and the MyEntity table view use an NSFetchedResultsController to show the records. Sometimes it crashes while I'm scrolling the table view, and sometimes it crashes when I try to open the detail view. I can navigate to the MyEntity detail view multiple times before the application crashes. What does this error mean, and how can I fix it?

  • Transaction handling in DataSet-based insert/update in C#

    - by user3703611
    I am trying to insert bulk records into a SQL Server database table using a DataSet, but I am unable to get transaction handling to work. Please help me apply transaction handling to the code below. I am using adapter.UpdateCommand.Transaction = trans; but this line gives me the error "Object reference not set to an instance of an object". Code:

        string ConnectionString = "server=localhost\\sqlexpress;database=WindowsApp;Integrated Security=SSPI;";
        SqlConnection conn = new SqlConnection(ConnectionString);
        conn.Open();
        SqlTransaction trans = conn.BeginTransaction(IsolationLevel.Serializable);
        SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Test ORDER BY Id", conn);
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);
        adapter.UpdateCommand.Transaction = trans;

        // Create a dataset object
        DataSet ds = new DataSet("TestSet");
        adapter.Fill(ds, "Test");

        // Create a data table object and add new rows
        DataTable TestTable = ds.Tables["Test"];
        for (int i = 1; i <= 50; i++)
        {
            DataRow row = TestTable.NewRow();
            row["Id"] = i;
            TestTable.Rows.Add(row);
        }

        // Update data adapter
        adapter.Update(ds, "Test");
        trans.Commit();
        conn.Close();

  • PHP SQL with foreach loop variable problem

    - by anthony
    This is getting really frustrating. I have a text file that I'm reading for a list of part numbers that goes into an array. I'm using the following foreach loop to search a database for matching numbers:

        $file = file('parts_array.txt');
        foreach ($file as $newPart) {
            $sql = "SELECT products_sku FROM products WHERE products_sku='" . $newPart . "'";
            $rs = mysql_query($sql);
            $num_rows = mysql_num_rows($rs);
            echo $num_rows;
            echo "<br />";
        }

    The problem is I'm getting 0 rows back from mysql_num_rows. I can type the SQL statement without the variable and it works perfectly. I can even echo out the SQL statement from this script, copy and paste the statement from the browser, and it works. But for some reason I'm not getting any records when I use the variable. I've used variables in SQL statements tons of times, but this really has me stumped.
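
    One thing worth ruling out: file() keeps the trailing newline on every line it returns, so each $newPart may really be "12345\n", which never equals the stored SKU; the echoed SQL looks fine in the browser because the newline is invisible in rendered HTML, and the copy-pasted version loses it. A sketch of the same loop with line endings stripped (FILE_IGNORE_NEW_LINES is the flag that drops them):

        $file = file('parts_array.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
        foreach ($file as $newPart) {
            $newPart = trim($newPart);  // also drop stray spaces/tabs
            $sql = "SELECT products_sku FROM products WHERE products_sku='"
                 . mysql_real_escape_string($newPart) . "'";
            $rs = mysql_query($sql);
            echo mysql_num_rows($rs);
            echo "<br />";
        }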

  • Uploading a database file in assets not returning a record

    - by Alexander
    I have a problem with a database file not being read. I added the database file, called mydb, to assets, but when I run my code no records are returned, so this toast is shown:

        Toast.makeText(this, "No contact found", Toast.LENGTH_LONG).show();

    This is an example from an Android application development book.

        public class DatabaseActivity extends Activity {
            /** Called when the activity is first created. */
            TextView quest, response1, response2;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                TextView quest = (TextView) findViewById(R.id.quest);
                try {
                    String destPath = "/data/data/" + getPackageName() + "/databases/MyDB";
                    File f = new File(destPath);
                    if (!f.exists()) {
                        CopyDB(getBaseContext().getAssets().open("mydb"),
                               new FileOutputStream(destPath));
                    }
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }

                DBAdapter db = new DBAdapter(this);
                //---get a contact---
                db.open();
                Cursor c = db.getContact(2);
                if (c.moveToFirst())
                    DisplayContact(c);
                else
                    Toast.makeText(this, "No contact found", Toast.LENGTH_LONG).show();
                db.close();
            }

            public void CopyDB(InputStream inputStream, OutputStream outputStream) throws IOException {
                //---copy 1K bytes at a time---
                byte[] buffer = new byte[1024];
                int length;
                while ((length = inputStream.read(buffer)) > 0) {
                    outputStream.write(buffer, 0, length);
                }
                inputStream.close();
                outputStream.close();
            }

            public void DisplayContact(Cursor c) {
                quest.setText(String.valueOf(c.getString(1)));
                //quest.setText(String.valueOf("this is a text string"));
            }
        }

  • Do you catch expected exceptions in the controller or the business service of your ASP.NET MVC application?

    - by Pascal
    I am developing an ASP.NET MVC application where user1 can delete data records that were just loaded by user2. User2 then either changes this now non-existent data record (update) or does an insert with this data into another table, so that a foreign-key constraint is violated. Where do you catch such expected exceptions: in the controller of your ASP.NET MVC application, or in the business service? Just a side note: I only catch the SqlException here if it is a foreign-key constraint exception, to tell the user that another user has deleted a certain parent record and therefore he cannot create the test plan. But this code is not fully implemented yet! Controller:

        public JsonResult CreateTestplan(Testplan testplan)
        {
            bool success = false;
            string error = string.Empty;
            try
            {
                success = testplanService.CreateTestplan(testplan);
            }
            catch (SqlException ex)
            {
                error = ex.Message;
            }
            return Json(new { success = success, error = error }, JsonRequestBehavior.AllowGet);
        }

    OR business service:

        public Result CreateTestplan(Testplan testplan)
        {
            Result result = new Result();
            try
            {
                using (var con = new SqlConnection(_connectionString))
                using (var trans = new TransactionScope())
                {
                    con.Open();
                    _testplanDataProvider.AddTestplan(testplan);
                    _testplanDataProvider.CreateTeststepsForTestplan(testplan.Id, testplan.TemplateId);
                    trans.Complete();
                    result.Success = true;
                }
            }
            catch (SqlException e)
            {
                result.Error = e.Message;
            }
            return result;
        }

    then in the controller:

        public JsonResult CreateTestplan(Testplan testplan)
        {
            Result result = testplanService.CreateTestplan(testplan);
            return Json(new { success = result.Success, error = result.Error }, JsonRequestBehavior.AllowGet);
        }

  • Separating and counting CSV entries from a database (Access/ASP Classic)

    - by Katherine Perotta
    Hey, I could really use some help with this one. I have an FAQ where each entry has multiple "tags", and I would like to separate and count them. They are currently in the database as follows (sample records):

        ID    TITLE           CONTENT           TAGS
        1     sampletitle 1   samplecontent     tag1,tag2,tag3
        2     sampletitle 2   moresamplestuff   tag3,tag4,tag5

    How could I go about counting the number of times each tag is used? In the end, would it be easier to just create a separate table called TAGS, with a single tag corresponding to a single ID in FAQ? The only reason I'd prefer not to do that is that I already have so much data that it would take quite a while. However, if there's no alternative, or if it's easier than doing string parsing like this, I'm willing to do it. The goal is to display each unique tag and the number of times it is used. Would it be better to do the heavy lifting in the database or in ASP? I have gotten as far as getting a list of all tags and displaying them in an array (with each tag separated). So at this point what I need to do is count each value and then remove the duplicates (while preserving the count somewhere). This is ASP Classic with an Access database. Thanks!
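
    The counting itself is a one-pass job over an associative array, whatever the language. A sketch of just that logic, written in PHP for brevity since the post's environment is ASP Classic; in VBScript the same shape works with Split() and a Scripting.Dictionary:

        <?php
        // $rows stands in for the TAGS column values read from the FAQ table.
        $rows = array('tag1,tag2,tag3', 'tag3,tag4,tag5');

        $counts = array();
        foreach ($rows as $tags) {
            foreach (explode(',', $tags) as $tag) {
                $tag = trim($tag);
                if ($tag === '') continue;
                // the array keys dedupe the tags; the values carry the counts
                $counts[$tag] = isset($counts[$tag]) ? $counts[$tag] + 1 : 1;
            }
        }
        arsort($counts);  // most-used tags first
        foreach ($counts as $tag => $n) {
            echo "$tag: $n\n";  // e.g. "tag3: 2"
        }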

  • PDO closeCursor Error

    - by Metropolis
    Hey everyone, I currently have a database layer that I wrote myself, and I have been using it for over a year without any problems. The database class uses PDO, and there are two different databases that I regularly connect to (MySQL and MS SQL). The MS SQL database is used for Accpac accounting storage, and the MySQL database is used for everything else. One of the MySQL databases holds all of the DSNs, which I use to build the connection strings for the MS SQL databases. I have a new program I am writing which takes employee data from one of the MySQL databases and uses the employee ID to get the employee's information from the MS SQL database. For some reason, whenever I run the program it gets through about 1,200 records (out of 11k) and then crashes with an error like the following:

        Fatal error: Call to a member function closeCursor() on a non-object

    I have tried moving the loops around in many different ways, and I have tried manually closing the connections by setting the database handle to null. Nothing I do seems to work. Thanks for any help! Metropolis
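
    That fatal error only says that whatever closeCursor() was called on was not a PDOStatement at that moment. A frequent cause is PDO::prepare() or PDO::query() returning false (for example after the server drops a long-running connection) and the false value being used anyway. A guard sketch with hypothetical function and variable names:

        <?php
        // Hypothetical shape of a fetch helper in the database layer:
        // verify the statement before touching closeCursor(), and surface
        // the real driver error instead of the downstream non-object call.
        function fetchAllRows(PDO $db, $sql, array $params = array()) {
            $stmt = $db->prepare($sql);
            if ($stmt === false) {
                // prepare() returns false rather than throwing unless
                // PDO::ATTR_ERRMODE is set to PDO::ERRMODE_EXCEPTION
                throw new RuntimeException(implode(' ', $db->errorInfo()));
            }
            $stmt->execute($params);
            $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
            $stmt->closeCursor();  // safe: $stmt is known to be a statement
            return $rows;
        }

    Setting PDO::ATTR_ERRMODE to PDO::ERRMODE_EXCEPTION on both connections has the same diagnostic effect: the underlying failure around record ~1200 becomes visible instead of the secondary non-object error.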

  • Rollback doesn't work in MySQLdb

    - by Anton Barycheuski
    I have the following code:

        ...
        db = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db, charset='utf8', use_unicode=True)
        db.autocommit(False)
        cursor = db.cursor()
        ...
        for col in ws.columns[1:]:
            data = (col[NUM_ROW_GENERATION].value, 1, type_topliv_dict[col[NUM_ROW_FUEL].value])
            fullgeneration_id = data[0]
            type_topliv = data[2]
            if data in completions_set:
                compl_id = completions_dict[data]
            else:
                ...
                sql = u"INSERT INTO completions (type, mark, model, car_id, type_topliv, fullgeneration_id, mark_id, model_id, production_period, year_from, year_to, production_period_url) VALUES (1, '%s', '%s', 0, %s, %s, %s, %s, '%s', '%s', '%s', '%s')" % (marks_dict[mark_id], models_dict[model_id], type_topliv, fullgeneration_id, mark_id, model_id, production_period, year_from, year_to, production_period.replace(' ', '_').replace(u'?.?.', 'nv'))
                inserted_completion += cursor.execute(sql)
                cursor.execute("SELECT fullgeneration_id, type, type_topliv, id FROM completions where fullgeneration_id = %s AND type_topliv = %s" % (fullgeneration_id, type_topliv))
                row = cursor.fetchone()
                compl_id = row[3]
            if is_first_car:
                deleted_compl_rus = cursor.execute("delete from compl_rus where compl_id = %s" % compl_id)
            for param, row_id in params:
                sql = u"INSERT INTO compl_rus (compl_id, modification, groupparam, param, paramvalue) VALUES (%s, '%s', '%s', '%s', %s)" % (compl_id, col[NUM_ROW_MODIFICATION].value, param[0], param[1], col[row_id].value)
                inserted_compl_rus += cursor.execute(sql)
            is_first_car = False

        db.rollback()
        print '\nSTATISTICS:'
        print 'Inserted completion:', inserted_completion
        print 'Inserted compl_rus:', inserted_compl_rus
        print 'Deleted compl_rus:', deleted_compl_rus
        ans = raw_input('Commit changes? (y/n)')
        db.close()

    I manually deleted records from the table and then ran the script twice. See https://dpaste.de/MwMa . I think the rollback in my code doesn't work. Why?

  • What are the Limitations for Connecting to an Access Query in Excel

    - by thornomad
    I have an Access 2007 database that has a number of tables, some fairly large (100,000+ records). I have created a union query to pull some of the same types of data from multiple tables into one large query for pivot table manipulation and reporting. For example:

        SELECT Language FROM Table1
        UNION ALL
        SELECT Language FROM Table2
        UNION ALL
        SELECT Language FROM Table3;

    This works. I quickly found, however, that a union query will not show up when connecting to the data source from Excel 2007. So I created a second query to reference the union query, like so:

        SELECT * FROM [The Above Union Query];

    This query works, and it was initially accessible from Excel. Time passed and I added more data. Suddenly, when I connect to my Access database from Excel, my query referencing the union has disappeared. MS Access shows no signs of an issue (the data displays in Access), and my other non-union queries show up in Excel 2007, but not the one that references the union. What could be going on? Why did it disappear? I noticed that if I switch some of the referenced tables in the union query to a smaller table (with fewer rows), all of a sudden the query appears in Excel again. At least, I think that's what the difference is. I really can't put my finger on why some of the union queries won't show up and some will. I am stumped and need some guidance. Thanks.

  • C# threading solution for long queries

    - by Eddie
    Scenario: We have an application that records incidents. An external database needs to be queried when an incident is approved by a supervisor. The queries to this external database sometimes take a while to run, and this lag is experienced through the browser. Possible solution: I want to use threading to eliminate the apparent hang in the browser. I have used the Thread class before and have heard about ThreadPool, but I just found BackgroundWorker in this post. MSDN states: "The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution." Is BackgroundWorker the way to go for handling long-running queries? What happens when two or more BackgroundWorker processes run simultaneously? Are they handled like a pool?
