Search Results

Search found 33242 results on 1330 pages for 'database optimization'.

Page 772/1330

  • Update SQL Server 2000 to SQL Server 2008: Benefits please?

    - by Ciaran Archer
    Hi there. I'm looking for the benefits of upgrading from SQL Server 2000 to 2008. I was wondering:
      - What database features can we leverage with 2008 that we can't now?
      - What new T-SQL features can we look forward to using?
      - What performance benefits can we expect to see?
      - What else will make management go for it?
    And the converse:
      - What problems can we expect to encounter?
      - What other problems have people found when migrating?
      - Why fix something that isn't (technically) broken?
    We work in a Java shop, so any .NET / CLR stuff won't rock our world. We also use Eclipse as our main development environment, so any integration with Visual Studio won't be a plus. We do use SQL Server Management Studio, however.
    Some background: our main database machine is a 32-bit Dell Intel Xeon MP CPU 2.0GHz, 40MB of RAM with Physical Address Extension, running Windows Server 2003 Enterprise Edition. We will not be changing our hardware. Our databases total under a TB, with some having more than 200 tables, but they are busy, and during busy times we see 60-80% CPU utilisation.
    Apart from the fact that SQL Server 2000 is coming close to end of life, why should we upgrade? Any and all contributions are appreciated!
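    On the T-SQL side of the question, a hedged illustration (not part of the original post): SQL Server 2008 added, among other things, multi-row VALUES inserts and the MERGE statement, neither of which exists in SQL Server 2000. The table and column names below are invented for the example.

        -- Multi-row insert (new in 2008; needs separate INSERT statements in 2000)
        INSERT INTO dbo.StatusCodes (Code, Description)
        VALUES ('A', 'Active'),
               ('I', 'Inactive'),
               ('P', 'Pending');

        -- MERGE: upsert staged rows into a target table in one statement (also new in 2008)
        MERGE dbo.Customers AS target
        USING dbo.StagedCustomers AS source
            ON target.CustomerId = source.CustomerId
        WHEN MATCHED THEN
            UPDATE SET target.Name = source.Name
        WHEN NOT MATCHED THEN
            INSERT (CustomerId, Name) VALUES (source.CustomerId, source.Name);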

    Read the article

  • SQLDeveloper using over 100MB of PGA+UGA

    - by Leigh Riffel
    Perhaps this is normal, but in my Oracle 11g database I am seeing programmers using Oracle's SQL Developer regularly consume more than 100MB of combined UGA and PGA memory. I'd like to know if this is normal and what can be done about it. Our database is on the 32-bit version of Windows 2008, so memory limitations are becoming an increasing concern. I am using the following query to show the memory usage:

        SELECT e.SID, e.username, e.status, b.PGA_MEMORY
        FROM v$session e
        LEFT JOIN (SELECT y.SID, y.value pga,
                          TO_CHAR(ROUND(y.value/1024/1024),99999999) || ' MB' PGA_MEMORY
                   FROM v$sesstat y, v$statname z
                   WHERE y.STATISTIC# = z.STATISTIC#
                     AND NAME = 'session pga memory') b
          ON e.sid = b.sid
        WHERE (PGA)/1024/1024 > 20
        ORDER BY 4 DESC;

    It seems that the resource usage goes up any time a table is opened in SQL Developer, but even when it is closed the memory does not go away. The problem is worse if the table is sorted while it was open, as that seems to use even more memory. I understand how this would use memory while it is sorting, and perhaps even while it is still open, but to use memory after it is closed seems wrong to me. Can anyone confirm this?
    Update: I discovered that my numbers were off due to not understanding that the UGA is stored in the PGA under dedicated server mode. This makes the numbers lower than they were, but the problem still remains that SQL Developer seems to use excessive PGA.
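    A possible variation, offered as a sketch rather than something from the original post (it assumes the standard v$ statistic names 'session pga memory' and 'session uga memory'), that reports both values per session and makes the dedicated-server overlap mentioned in the update easier to see:

        -- Per-session PGA and UGA, in MB, for sessions above a threshold
        SELECT s.sid,
               s.username,
               s.program,
               ROUND(MAX(CASE WHEN n.name = 'session pga memory' THEN st.value END) / 1024 / 1024) AS pga_mb,
               ROUND(MAX(CASE WHEN n.name = 'session uga memory' THEN st.value END) / 1024 / 1024) AS uga_mb
        FROM   v$session s
        JOIN   v$sesstat st ON st.sid = s.sid
        JOIN   v$statname n ON n.statistic# = st.statistic#
        WHERE  n.name IN ('session pga memory', 'session uga memory')
        GROUP  BY s.sid, s.username, s.program
        HAVING MAX(CASE WHEN n.name = 'session pga memory' THEN st.value END) > 20 * 1024 * 1024
        ORDER  BY pga_mb DESC;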

    Read the article

  • How should I organize complex SQL views in Rails?

    - by Benjamin Oakes
    I manage a research database with Ruby on Rails. The data that is entered is primarily used by scientists who prefer to have all the relevant information for a study in one single massive table for use in their statistics software of choice. I'm currently presenting it as CSV, as it's very straightforward to do and compatible with the tools people want to use.
    I've written many views (the SQL kind, not the Rails HTML/ERB kind) to make the output they expect a reality. Some of these views are quite large and have a fair amount of complexity behind them. I wrote them in SQL because there are many calculations and comparisons that are more easily done with SQL. They're currently loaded into the database straight from a file named views.sql. To get the requested data, I do a "select * from my_view;".
    The views.sql file is getting quite large. Part of the problem is that we're still figuring out what the data we collect means, so there are a lot of changes being made to the views all the time -- and a ton of them are being created. Many of them need to be repeatable. I've recently run into issues organizing and testing these views. Rails works great for user interface stuff and business logic, but I'm not aware of much existing structure for handling the reporting we require. Some options I've thought of:
      - Should I move them into the most relevant models somehow? Several of the views interact with each other, which makes this situation more complex than just doing a single find_by_sql, so I don't know if they should only be part of the model.
      - Perhaps they should be treated as a "view" in the MVC sense? (That is, they could be moved into app/views/ and live alongside the HTML, perhaps as files named something like my_view.csv.sql which return CSV.)
    How would you deal with a complex reporting problem like this?
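    One conventional pattern, sketched here as an editorial illustration rather than something from the question (the MyView class name is invented, and older Rails versions use set_table_name instead of self.table_name=), is to expose each SQL view through a read-only ActiveRecord model. That gives the views a home in app/models and makes them testable with the usual Rails tooling:

        # app/models/my_view.rb
        # Read-only model backed by the database view "my_view" defined in views.sql.
        class MyView < ActiveRecord::Base
          self.table_name = "my_view"   # point ActiveRecord at the view instead of a table

          def readonly?
            true                        # prevent accidental writes through the view
          end

          # CSV export lives next to the data it describes.
          def self.to_csv
            require "csv"
            CSV.generate do |csv|
              csv << column_names
              all.each { |row| csv << row.attributes.values_at(*column_names) }
            end
          end
        end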

    Read the article

  • How to use NHibernate Validator + NHibernate component + DDL

    - by mynkow
    I just configured my NHibernate Validator. NHibernate creates the DB schema. When I set MaxLenght="20" on some property of a class, the length appears in the database column. I am doing this in the NHib Validator XML file. But the problem is that I have components and cannot figure out how to achieve this behaviour. The component is configured correctly in the Customer.hbm.xml file.
    EDIT: Well, I found that Hibernate Validator users had the same problem two years ago: http://opensource.atlassian.com/projects/hibernate/browse/HV-25. Is this an issue for NHibernate Validator, or is it fixed? If it is working, please tell me how.

        public class Customer
        {
            public virtual string Name { get; set; }
            public virtual Contact Contacts { get; }
        }

        public class Contact
        {
            public virtual string Address { get; set; }
        }

        <?xml version="1.0" encoding="utf-8" ?>
        <nhv-mapping xmlns="urn:nhibernate-validator-1.0" namespace="MyNamespace" assembly="MyAssembly">
          <class name="Customer">
            <property name="Name">
              <length max="20"/>
            </property>
            <property name="Contacts">
              <notNull/>
              <valid/>
            </property>
          </class>
        </nhv-mapping>

        <?xml version="1.0" encoding="utf-8" ?>
        <nhv-mapping xmlns="urn:nhibernate-validator-1.0" namespace="MyNamespace" assembly="MyAssembly">
          <class name="Contact">
            <property name="Address">
              <length max="50"/>
              <valid/>
            </property>
          </class>
        </nhv-mapping>

    Read the article

  • Is using private shared objects/variables at class level harmful?

    - by haansi
    Hello, thanks for your attention and time. I need your opinion on a basic architectural issue, please.
    In page code-behind classes I am using private, shared objects and variables (a list, or just a client, or simply an int id) to temporarily hold data coming from the database or a class library. The object is used temporarily to catch data and then to return it, pass it to some function, or bind a control.
    First: can this approach do any harm in any way? I couldn't analyze it, but one thought was that using such shared variables may mean the data in them gets replaced when multiple users send requests at the same time.
    Second: please also comment on using such variables in the BLL (to hold data coming from the DAL/database). In this example a new object of the BLL class will be made every time. Here is sample code:

        public class ClientManager
        {
            Client objclient = new Client();               // used in 1st and 2nd method
            List<Client> clientlist = new List<Client>();  // used in 3rd and 4th method
            ClientRepository objclientRep = new ClientRepository();

            public List<Client> GetClients()
            {
                return clientlist = objclientRep.GetClients();
            }

            public List<Client> SearchClients(string Keyword)
            {
                return clientlist = objclientRep.SearchClients(Keyword);
            }

            public Client GetaClient(int ClientId)
            {
                return objclient = objclientRep.GetaClient(ClientId);
            }

            public Client GetClientDetailForConfirmOrder(int UserId)
            {
                return objclientRep.GetClientDetailForConfirmOrder(UserId);
            }
        }

    I am really thankful to you for sparing time and paying kind attention.
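    A hedged illustration of the concern, not code from the question (it reuses the Client and ClientRepository types above): instance fields like clientlist are safe per request because a new ClientManager is created each time, but if the same members were declared static/Shared they would be shared across all requests, and concurrent users could overwrite each other's data. Returning the repository result directly avoids keeping any per-call state in fields at all:

        using System.Collections.Generic;

        public class ClientManager
        {
            // No instance fields holding per-call data: each call returns its own result.
            private readonly ClientRepository repository = new ClientRepository();

            public List<Client> GetClients()
            {
                return repository.GetClients();
            }

            public Client GetaClient(int clientId)
            {
                return repository.GetaClient(clientId);
            }

            // For illustration only: a static field like this WOULD be shared by every
            // request in the application, so two simultaneous users could overwrite
            // each other's data.
            // private static List<Client> sharedClientList;
        }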

    Read the article

  • Using JavaScript and PHP together

    - by EmmyS
    I have a PHP form that needs some very simple validation on submit. I'd rather do the validation client-side, as there's quite a bit of server-side validation that happens to deal with writing form values to a database. So I just want to call a JavaScript function onsubmit to compare values in two password fields. This is what I've got:

        function validate(form) {
            var password = form.password.value;
            var password2 = form.password2.value;
            alert("password:" + password + " password2:" + password2);
            if (password != password2) {
                alert("not equal");
                document.getElementByID("passwordError").style.display = "inline";
                return false;
            }
            alert("equal");
            return true;
        }

    The idea being that a default-hidden div containing an error message would be displayed if the two passwords don't match. The alerts are just to display the values of password and password2, and then again to indicate whether they match or not (they will not be used in production code). I'm using an input type=submit button, and calling the function in the form tag:

        <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post" onsubmit="return validate(this);">

    Everything alerts as expected when entering non-matching values. I would have hoped (and assumed, based on past use) that if the function returned false, the actual submit would not occur. And yet, it does. I'm testing by entering non-matching values in the password fields, and the alerts clearly show me the values and the "not equal" result, but the actual form action still occurs and it tries to write to my database. I'm pretty new at PHP; is there something about it that will not let me combine it with JavaScript this way? Would it be better to use an input type=button and call submit() in the function itself if it returns true?
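    A note added in editing rather than from the original post: the DOM method is document.getElementById (lower-case "d"), so the call above throws after the "not equal" alert, the handler never reaches "return false", and the browser submits the form anyway. A minimal corrected sketch:

        function validate(form) {
            var password = form.password.value;
            var password2 = form.password2.value;
            if (password !== password2) {
                // getElementById, not getElementByID: the typo throws a TypeError,
                // the handler never returns false, and the form submits.
                document.getElementById("passwordError").style.display = "inline";
                return false;
            }
            return true;
        }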

    Read the article

  • IIS6, ASP.NET MVC 1 and random slowdowns

    - by Mr Snuffle
    I've recently deployed an MVC application to an IIS6 web server. One strange behaviour I've been having is that load times will randomly blow up to 30sec+ and then return to normal. Our tests have shown this occurring on multiple connections at the same time. Once the wait has passed, the site becomes responsive again. It's completely random when this will occur, but it will probably happen about once every 15 minutes or so.
    My first thought was that the application was being restarted by the web server for some reason, but I determined this wasn't the case because process recycling is set very infrequently, and I placed some logging in the application startup. It's also nothing to do with the database connection: this slowdown happens simply by moving between static pages too, and I've watched the database with a SQL profiler and nothing is hitting it when these slowdowns occur. Finally, I've placed entry and exit logging on my controller actions; the slowdown always happens outside of the controller, and the entry and exit time for a controller action is always appropriately fast.
    Does anyone have any ideas of what could be causing this? I've tried running it locally on IIS7 and I haven't had the issue. I can only think it's something to do with our hosting provider.

    Read the article

  • Pre-populate iPhone Safari SQLite DB

    - by Matt Rogish
    I'm working with a PhoneGap app that uses Safari local storage (a SQLite DB) via JavaScript: http://developer.apple.com/safari/library/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/UsingtheJavascriptDatabase/UsingtheJavascriptDatabase.html
    On first load, the app creates the database and tables, and populates the data via a series of INSERT statements. If the user closes the app while this processing is happening, my app database is left in an inconsistent state. What I would prefer to do is deploy the SQLite DB as part of my iTunes app packaging, so nothing must be populated at app cold start. However, I'm not sure if that is possible -- all of the Google hits for this topic that I can find refer to the Core Data-provided SQLite, which is not what we're using...
    If it's not possible, could I wrap the entire thing in a transaction and keep re-trying it when the app is restarted? Failing that, I guess I can create a simple table with one boolean column "is_app_db_loaded?" and set it to true after I've processed all my inserts. But that's really gross... Ideas? Thanks!!
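    On the transaction idea, a hedged sketch (table and column names are invented for the example): the JavaScript database API runs each db.transaction callback atomically, so seeding along these lines either commits all of the rows or none of them, and can simply be re-run on the next launch if it was interrupted.

        var db = window.openDatabase("appdb", "1.0", "App Database", 5 * 1024 * 1024);

        db.transaction(function (tx) {
            // Everything in this callback is one atomic transaction:
            // if the app dies part-way through, none of it is committed.
            tx.executeSql("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)");
            tx.executeSql("INSERT INTO items (id, name) VALUES (?, ?)", [1, "First"]);
            tx.executeSql("INSERT INTO items (id, name) VALUES (?, ?)", [2, "Second"]);
        }, function (err) {
            // Error callback: the transaction was rolled back; safe to retry on next launch.
            console.log("Seeding failed, will retry: " + err.message);
        }, function () {
            console.log("Seeding committed.");
        });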

    Read the article

  • Is it possible to definitively identify whether a DML command was issued from a stored procedure?

    - by Ed Harper
    I have inherited a SQL Server 2008 database to which calling applications have access through stored procedures. Each table in the database has a shadow audit table into which Insert/Update/Delete operations are logged. Performance testing on populating the audit tables showed that inserting the audit records using OUTPUT clauses was 20% or so faster than using triggers, so this has been implemented in the stored procedures.
    However, because this design cannot track changes made through DML statements issued directly against the tables, triggers have also been implemented which use the value of @@NESTLEVEL to determine whether or not to run the trigger (the assumption being that all DML run through stored procedures will have @@NESTLEVEL > 1). I.e. the body of the trigger code looks something like:

        IF @@NESTLEVEL = 1 -- implies call is direct SQL, so generate history from here
        BEGIN
            ... insert into audit table
        END

    This design is flawed because it won't track updates where DML statements are executed in dynamic SQL, or any other context where @@NESTLEVEL is raised above 1. Can anyone suggest a completely reliable method we can use in the triggers to execute them only if not triggered by a stored procedure? Or is this (as I suspect) not possible?
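    One approach worth sketching as an editorial aside (not from the original question, and with invented table names): have every stored procedure set a session marker with SET CONTEXT_INFO, and have the trigger audit only when the marker is absent. Unlike @@NESTLEVEL, CONTEXT_INFO stays with the session regardless of dynamic SQL or nesting depth.

        -- In each stored procedure, before its DML (marker 0x01 = "called from a procedure"):
        SET CONTEXT_INFO 0x01;
        -- ... the procedure's DML, audited via its OUTPUT clauses ...
        SET CONTEXT_INFO 0x00;  -- clear the marker before returning
        GO

        -- Table trigger: audit only when the marker is absent, i.e. direct DML.
        CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
        AFTER INSERT, UPDATE, DELETE
        AS
        BEGIN
            IF ISNULL(SUBSTRING(CONTEXT_INFO(), 1, 1), 0x00) <> 0x01
            BEGIN
                INSERT INTO dbo.MyTable_Audit (Id, Name, AuditedAt)
                SELECT Id, Name, GETDATE()
                FROM inserted;  -- deleted rows would be handled similarly
            END
        END;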

    Read the article

  • PHP returns invalid MySQL resource

    - by DeadMG
    $LDATE = '#' . $_REQUEST['LDateDay'] . '/' . $_REQUEST['LDateMonth'] . '/' . $_REQUEST['LDateYear'] . '#';
    $RDATE = '#' . $_REQUEST['RDateDay'] . '/' . $_REQUEST['RDateMonth'] . '/' . $_REQUEST['RDateYear'] . '#';

    include("../../sql.php");
    $myconn2 = mysql_connect(/*removed*/, $username, $password);
    mysql_select_db(/*removed*/, $myconn2);

    $LSQLRequest = "SELECT * FROM flight WHERE DepartureDate = " . $LDATE;
    $LFlights = mysql_query($LSQLRequest, $myconn2);

    $RSQLRequest = "SELECT * FROM flight WHERE DepartureDate = " . $RDATE;
    $RFlights = mysql_query($RSQLRequest, $myconn2);

    Assuming that all the $_REQUESTs are valid numerical values for their appropriate day/month/year fields, how can $LFlights and $RFlights be invalid? When I polled the whole database I got hundreds of results, so I know that the database and connection data are fine, and the field DepartureDate exists too.
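    A hedged debugging sketch, not from the original post: when mysql_query() returns false instead of a resource, mysql_error() reports why, which usually pinpoints the offending SQL (the '#...#' date delimiters are MS Access style, and in MySQL '#' starts a comment, so the statement is likely to be rejected; the quoted-date rewrite below is an assumption about the column type).

        <?php
        $LFlights = mysql_query($LSQLRequest, $myconn2);
        if ($LFlights === false) {
            // mysql_query() returns false on failure; mysql_error() explains why.
            die("Query failed: " . mysql_error($myconn2) . " -- SQL was: " . $LSQLRequest);
        }

        // MySQL expects quoted 'YYYY-MM-DD' date literals rather than #...# delimiters:
        $LDATE = "'" . $_REQUEST['LDateYear'] . "-" . $_REQUEST['LDateMonth'] . "-" . $_REQUEST['LDateDay'] . "'";
        ?>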

    Read the article

  • C++ - passing references to boost::shared_ptr

    - by abigagli
    If I have a function that needs to work with a shared_ptr, wouldn't it be more efficient to pass it a reference to it (to avoid copying the shared_ptr object)? What are the possible bad side effects? I envision two possible cases:
    1) Inside the function a copy is made of the argument, as in:

        ClassA::take_copy_of_sp(boost::shared_ptr<foo> &sp)
        {
            ...
            m_sp_member = sp; // This will copy the object, incrementing refcount
            ...
        }

    2) Inside the function the argument is only used, as in:

        Class::only_work_with_sp(boost::shared_ptr<foo> &sp) // Again, no copy here
        {
            ...
            sp->do_something();
            ...
        }

    I can't see in either case a good reason to pass the boost::shared_ptr by value instead of by reference. Passing by value would only "temporarily" increment the reference count due to the copying, and then decrement it when exiting the function scope. Am I overlooking something?
    Andrea.
    EDIT: Just to clarify, after reading several answers: I perfectly agree on the premature-optimization concerns, and I always try to first profile, then work on the hotspots. My question was more from a purely technical code point of view, if you know what I mean.
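    A small hedged aside, not part of the original question: when the callee only reads through the pointer, the usual convention is to take a const reference, which documents that no ownership is taken and still avoids the refcount increment/decrement of pass-by-value.

        #include <boost/shared_ptr.hpp>

        struct foo {
            void do_something() {}
        };

        // Read-only use: no copy, no refcount traffic, and the caller's pointer
        // cannot be accidentally reseated.
        void only_work_with_sp(const boost::shared_ptr<foo>& sp)
        {
            if (sp) {
                sp->do_something();
            }
        }

        // Taking ownership: accepting by value makes the copy (and the intent to
        // keep a reference) explicit at the call site.
        class ClassA {
        public:
            void take_copy_of_sp(boost::shared_ptr<foo> sp) { m_sp_member = sp; }
        private:
            boost::shared_ptr<foo> m_sp_member;
        };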

    Read the article

  • Corrupt UTF-8 Characters with PHP 5.2.10 and MySQL 5.0.81

    - by jkndrkn
    We have an application hosted on both a local development server and a live site. We are experiencing UTF-8 corruption issues and are looking to figure out how to resolve them. The system is run using symfony 1.0 with Propel.
    On our development server, we are running PHP 5.2.0 and MySQL 5.0.32. We do not experience corrupted UTF-8 characters there. On our live site, PHP 5.2.10 and MySQL 5.0.81 are running. On that server, certain characters such as ô´ and S are corrupted once they are stored in the database. The corrupted characters show up as either question marks or approximations of the original character with adjacent question marks. Examples of corruption:
      - Uncorrupted: ô´   Corrupted: ô?
      - Uncorrupted: S    Corrupted: ?
    We are currently using the following techniques on both development and live servers:
      - Executing the following queries prior to execution of any other queries:
            SET NAMES 'utf8' COLLATE 'utf8_unicode_ci'
            SET CHARSET 'utf8'
      - Setting the <meta> Content-Type value to:
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
      - Adding the following to our .htaccess file:
            AddDefaultCharset utf-8
      - Using mb_* (multibyte) PHP functions where necessary.
      - Being sure to set database columns to use utf8_unicode_ci collation.
    These techniques are sufficient for our development site, but do not work on the live site. On the live site I've also tried adding mysql_set_encoding('ut8', $mysql_connection) but this does not help either. I have found some evidence that newer versions of PHP and MySQL mishandle UTF-8 character encodings.
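    An aside added in editing rather than part of the original question: the function in PHP's mysql extension is mysql_set_charset() (there is no built-in mysql_set_encoding), it expects the charset name 'utf8', and it requires PHP 5.2.3+ and MySQL 5.0.7+. A hedged sketch of the connection setup, with placeholder credentials:

        <?php
        $connection = mysql_connect($host, $username, $password);
        mysql_select_db($dbname, $connection);

        // Sets the connection character set the same way SET NAMES does, but in a
        // way the client library itself is aware of.
        if (!mysql_set_charset('utf8', $connection)) {
            die('Could not set connection charset: ' . mysql_error($connection));
        }
        ?>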

    Read the article

  • Classic ASP application-wide initializations and object caching

    - by slack3r
    In classic ASP (which I am forced to use), I have a few factory functions, that is, functions that return classes. I use JScript. In one include file I use these factory functions to create some classes that are used throughout the application. This include file is included with the #include directive in all pages. These factory functions do some "heavy lifting" and I don't want them to be executed on every page load. So, to make this clear, I have something like this:

        // factory.inc
        function make_class(arg1, arg2) {
            function klass() {
                //...
            }
            // ... Some heavy stuff
            return klass;
        }

        // init.inc, included everywhere
        <!-- #include FILE="factory.inc" -->
        // ...
        MyClass1 = make_class(myarg01, myarg02);
        MyClass2 = make_class(myarg11, myarg12);
        //...

    How can I achieve the same effect without calling make_class on every page load? I know that:
      - I can't cache the classes in the Application object
      - I can't use the Application_OnStart hook in Global.asa
      - I could probably create a scripting component, but I really don't want to do that
    So, is there something else I can do? Maybe some way to achieve caching of these classes, which are really objects in JScript.
    PS: [further clarification] In the above code "heavy stuff" is not so heavy, but I just want to know if there's a way to avoid it being executed all the time. It reads database meta information, builds a table of the primary keys in the database and another table that resolves strings to classes, etc.

    Read the article

  • Problems updating a TextBox in ASP.NET

    - by Roger Filipe
    Hello, I'm starting out in ASP.NET and am having some problems that I do not understand.
    The problem is this: I am building a site for news. Every news item has a title and a body. I have a page where I can insert news; this page uses a textbox for each of the fields (title and body), and after clicking the submit button everything goes OK and the values are saved in the database. I have another page where I can read the news, using a label for each of the fields; these labels are set in Page_Load.
    Now I'm having problems on the page where I can edit the news. I load the two textboxes (title and body) in Page_Load, so far so good, but when I change the text and click the submit button, it ignores the changes I made to the text and saves the text loaded in Page_Load. This code doesn't show any database connection, but you can understand what I'm talking about:

        protected void Page_Load(object sender, EventArgs e)
        {
            textboxTitle.Text = "This is the title of the news";
            textboxBody.Text = "This is the body of the news";
        }

    I load the page, make the changes in the text, and then click submit:

        protected void btnSubmit_Click(object sender, EventArgs e)
        {
            String title = textboxTitle.Text;
            String body = textboxBody.Text;

            Response.Write("Title: " + title + " || ");
            Response.Write("Body: " + body);
        }

    Nothing happens; the text in the textboxes is always the one I loaded in Page_Load. How do I update the text in the textboxes?
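    A note added in editing, since this is the classic symptom: Page_Load runs again on every postback, so it overwrites whatever the user typed before btnSubmit_Click reads the textboxes. Guarding the initial load with Page.IsPostBack avoids that; a minimal sketch:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Only pre-fill the textboxes on the first GET request.
            // On postbacks (e.g. the submit button), keep the user's edited values.
            if (!IsPostBack)
            {
                textboxTitle.Text = "This is the title of the news";
                textboxBody.Text = "This is the body of the news";
            }
        }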

    Read the article

  • Algorithm for finding similar users through a join table

    - by Gdeglin
    I have an application where users can select a variety of interests from around 300 possible interests. Each selected interest is stored in a join table containing the columns user_id and interest_id. Typical users select around 50 interests out of the 300.
    I would like to build a system where users can find the top 20 users that have the most interests in common with them. Right now I am able to accomplish this using the following query:

        SELECT i2.user_id, count(i2.interest_id) AS count
        FROM interests_users AS i1, interests_users AS i2
        WHERE i1.interest_id = i2.interest_id
          AND i1.user_id = 35
        GROUP BY i2.user_id
        ORDER BY count DESC
        LIMIT 20;

    However, this query takes approximately 500 milliseconds to execute with 10,000 users and 500,000 rows in the join table. All indexes and database configuration settings have been tuned to the best of my ability. I have also tried avoiding the use of joins altogether using the following query:

        SELECT user_id, count(interest_id) count
        FROM interests_users
        WHERE interest_id IN (13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,508)
        GROUP BY user_id
        ORDER BY count DESC
        LIMIT 20;

    But this one is even slower (~800 milliseconds). How could I best lower the time to gather this kind of data to below 100 milliseconds? I have considered putting this data into a graph database like Neo4j, but I am not sure if that is the easiest solution or if it would even be faster than what I am currently doing.

    Read the article

  • How to determine one to many column name from entity type

    - by snicker
    I need a way to determine the name of the column used by NHibernate to join one-to-many collections from the collected entity's type. I need to be able to determine this at runtime. Here is an example. I have some entities:

        namespace Entities
        {
            public class Stable
            {
                public virtual int Id { get; set; }
                public virtual string StableName { get; set; }
                public virtual IList<Pony> Ponies { get; set; }
            }

            public class Dude
            {
                public virtual int Id { get; set; }
                public virtual string DudesName { get; set; }
                public virtual IList<Pony> PoniesThatBelongToDude { get; set; }
            }

            public class Pony
            {
                public virtual int Id { get; set; }
                public virtual string Name { get; set; }
                public virtual string Color { get; set; }
            }
        }

    I am using NHibernate to generate the database schema, which comes out looking like this:

        create table "Stable" (Id integer, StableName TEXT, primary key (Id))
        create table "Dude" (Id integer, DudesName TEXT, primary key (Id))
        create table "Pony" (Id integer, Name TEXT, Color TEXT, Stable_id INTEGER, Dude_id INTEGER, primary key (Id))

    Given that I have a Pony entity in my code, I need to be able to find out:
      A. Does Pony even belong to a collection in the mapping?
      B. If it does, what are the column names in the database table that pertain to collections?
    In the above instance, I would like to see that Pony has two collection columns, Stable_id and Dude_id.

    Read the article

  • LINQ to SQL XML mapping to a generic type

    - by Manuel Navarro
    I'm trying to use an external XML file to map the output from a stored procedure onto an instance of a class. The problem is that my class is of a generic type:

        public class MyValue<T>
        {
            public T Value { get; set; }
        }

    Searching through a lot of blogs and articles I've managed to get this:

        <?xml version="1.0" encoding="utf-8" ?>
        <Database Name="" xmlns="http://schemas.microsoft.com/linqtosql/mapping/2007">
          <Table Name="MyValue" Member="MyNamespace.MyValue`1" >
            <Type Name="MyNamespace.MyValue`1">
              <Column Name="Category" Member="Value" DbType="VarChar(100)" />
            </Type>
          </Table>
          <Function Method="GetResourceCategories" Name="myprefix_GetResourceCategories" >
            <ElementType Name="MyNamespace.MyValue`1"/>
          </Function>
        </Database>

    The MyNamespace.MyValue`1 trick works fine, and the class is recognized. I expect four rows from the stored procedure, and I'm getting four MyValue<string> instances, but the big problem is that the Value property of all four instances is null. The property is not getting mapped and I don't really get why. It may be worth noting that the Value property is generic, and that when the mapping is done using attributes it works perfectly. Anyone have a clue?
    BTW, the GetResourceCategories method:

        public ISingleResult<MyValue<string>> GetResourceCategories()
        {
            IExecuteResult result = this.ExecuteMethodCall(
                this, (MethodInfo)MethodInfo.GetCurrentMethod());
            return (ISingleResult<MyValue<string>>)result.ReturnValue;
        }

    Read the article

  • ALTER dilemma: how to use ALTER to set the primary key and other attributes

    - by Rachel
    I have the following table in the database, and I need to alter it to the schema mentioned below. Initially I was dropping the current table and creating a new one with CREATE, but I am not supposed to do that; I should use ALTER. I am not sure how I can use ALTER to add the primary key and the other constraints. Any suggestions?
    Current:

        CREATE TABLE `details` (
          `KEY` varchar(255) NOT NULL,
          `ID` bigint(20) NOT NULL,
          `CODE` varchar(255) NOT NULL,
          `C_ID` bigint(20) NOT NULL,
          `C_CODE` varchar(64) NOT NULL,
          `CCODE` varchar(255) NOT NULL,
          `TCODE` varchar(255) NOT NULL,
          `LCODE` varchar(255) NOT NULL,
          `CAMCODE` varchar(255) NOT NULL,
          `OFCODE` varchar(255) NOT NULL,
          `OFNAME` varchar(255) NOT NULL,
          `PRIORITY` bigint(20) NOT NULL,
          `STDATE` datetime NOT NULL,
          `ENDATE` datetime NOT NULL,
          `INT` varchar(255) NOT NULL,
          `PHONE` varchar(255) NOT NULL,
          `TV` varchar(255) NOT NULL,
          `MTV` varchar(255) NOT NULL,
          `TYPE` varchar(255) NOT NULL,
          `CREATED` datetime NOT NULL,
          `MAIN` varchar(255) NOT NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Desired:

        CREATE TABLE `details` (
          `id` bigint(20) NOT NULL,
          `code` varchar(255) NOT NULL,
          `cid` bigint(20) NOT NULL,
          `ccode` varchar(64) NOT NULL,
          `c_code` varchar(255) NOT NULL,
          `tcode` varchar(255) NOT NULL,
          `lcode` varchar(255) NOT NULL,
          `camcode` varchar(255) NOT NULL,
          `ofcode` varchar(255) NOT NULL,
          `ofname` varchar(255) NOT NULL,
          `priority` bigint(20) NOT NULL,
          `stdate` datetime NOT NULL,
          `enddate` datetime NOT NULL,
          `list` varchar(255) NOT NULL,
          `name` varchar(255) NOT NULL,
          `created` datetime NOT NULL,
          `date` datetime NOT NULL,
          `ofshn` int(20) NOT NULL,
          `ofcl` int(20) NOT NULL,
          `ofr` int(20) NOT NULL,
          PRIMARY KEY (`code`,`ccode`,`list`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    Thanks!!!
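    A hedged sketch of the ALTER TABLE direction, added as an editorial illustration only: it covers a few representative changes rather than the full column-by-column migration, and assumes MySQL syntax as in the CREATE statements above.

        -- Rename / retype existing columns (CHANGE takes old name, new name, new definition)
        ALTER TABLE `details`
          CHANGE `ID` `id` bigint(20) NOT NULL,
          CHANGE `CODE` `code` varchar(255) NOT NULL,
          CHANGE `C_ID` `cid` bigint(20) NOT NULL;

        -- Drop columns that are no longer wanted and add the new ones
        ALTER TABLE `details`
          DROP COLUMN `KEY`,
          DROP COLUMN `PHONE`,
          ADD COLUMN `list` varchar(255) NOT NULL,
          ADD COLUMN `ofshn` int(20) NOT NULL;

        -- Finally, add the composite primary key
        ALTER TABLE `details`
          ADD PRIMARY KEY (`code`, `ccode`, `list`);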

    Read the article

  • From a Sinatra::Base object: get the port of the application including the base object

    - by Poul
    I have a Sinatra::Base object that I would like to include in all of my web apps. In that base class I have the configure method, which is called on start-up. I would like that configure code to 'register' the service with a centralized database. The information that needs to be sent when registering is the information on how to contact this web service -- things like host and port.
    I then plan on having a monitoring service that will spin over all registered services and occasionally ping them to make sure they are still up and running. In the configure method I am having trouble getting the port information. The 'self.settings.port' variable doesn't seem to work in this method.
    a) Any ideas on how to get the port? I have the host.
    b) Is there a Sinatra plug-in that already does something like this so I don't have to write it myself? :-)

        # in my Sinatra::Base code, let's call it register_me.rb
        class RegisterMe < Sinatra::Base
          configure do
            # save host and port information to database
          end

          get '/check_status' do
            # return status
          end
        end

        # in my web service code
        require 'register_me'
        # at this point, Sinatra will initialize the RegisterMe object and call configure

        post '/blah' do
          # example of a method for this particular web service
        end

    Read the article

  • Barcode to Product Name Converter

    - by spagetticode
    Hi all, we are designing a mobile shopping system. The camera on the phone will read a barcode, and then we have to convert the barcode to a standard product name in order to save it to our database. We save it to our database because we connect to the web services of local e-commerce sites to get their prices for the related product: we send the product name, get the prices back, and the user can then see the prices, compare and buy. We cannot send the barcode number to the e-commerce sites because some sites do not hold barcode-number information. So I have to somehow get the product name knowing only the barcode.
    Google returns results when a barcode number is searched, but how am I going to parse that data? Or how am I going to know which result of a Google search best suits my input? Is there a site that sells matched barcode and product name data? We are designing the system with C#.
    Thanks a lot.

    Read the article

  • Updating multiple nodes in XML with XQuery and xdmp:node-replace

    - by morja
    Hi all, I want to update an XML document in my XML database (MarkLogic). I have XML as input and want to replace each node that exists in the target XML. If a node does not exist it would be great if it gets added, but that's maybe another task.
    My XML in the database:

        <user>
          <username>username</username>
          <firstname>firstname</firstname>
          <lastname>lastname</lastname>
          <email>[email protected]</email>
          <comment>comment</comment>
        </user>

    The value of $user_xml:

        <user>
          <firstname>new firstname</firstname>
          <lastname>new lastname</lastname>
        </user>

    My function so far:

        declare function update-user(
          $username as xs:string,
          $user_xml as node()) as empty-sequence()
        {
          let $uri := user-uri($username)
          return
            for $node in $user_xml/user
            return xdmp:node-replace(fn:doc($uri)/user/fn:node-name($node), $node)
        };

    First of all, I cannot iterate over $user_xml/user. If I try to iterate over $user_xml I get an "arg1 is not of type node()" exception. But maybe it's the wrong approach anyway? Does anybody have sample code for how to do this?

    Read the article

  • PHP Exceptions in Classes

    - by mike condiff
    I'm writing a web application (PHP) for my friend and have decided to use my limited OOP training from Java. My question is: what is the best way to note in my class/application that specific critical things failed, without actually breaking my page?
    Currently my problem is that I have an object, "SummerCamper", which takes a camper_id as its constructor argument in order to load all of the necessary data into the object from the database. Say someone specifies a camper_id in the query string that does not exist; I pass it to my object's constructor and the load fails. Currently I don't see a way for me to just return false from the constructor.
    I have read I could possibly do this with exceptions -- throwing an exception if no records are found in the database, or if some sort of validation fails on the camper_id input from the application, etc. However, I have not really found a great way to alert my program that the object load has failed. I tried returning false from within the CATCH, but the object still persists in my PHP page. I do understand I could set a variable $is_valid = false if the load fails and then check the object using a get method, but I think there may be better ways.
    What is the best way of achieving the essential termination of an object if a load fails? Should I load data into the object from outside the constructor? Is there some sort of design pattern that I should look into? Any help would be appreciated.
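    A hedged sketch of the exception route (the SummerCamper class and camper_id come from the question; the exception class and the fetchCamper() helper are invented for illustration): a constructor cannot return false, but it can throw, and if it throws, the object is never assigned at all -- the calling page decides what "failed" looks like.

        <?php
        class CamperNotFoundException extends Exception {}

        class SummerCamper
        {
            private $data;

            public function __construct($camper_id)
            {
                // fetchCamper() is a stand-in for whatever data-access call is used.
                $row = $this->fetchCamper($camper_id);
                if ($row === null) {
                    // Constructors cannot return false, but they can refuse to construct.
                    throw new CamperNotFoundException("No camper with id $camper_id");
                }
                $this->data = $row;
            }

            private function fetchCamper($camper_id)
            {
                // ... database lookup elided; return null when no record exists ...
                return null;
            }
        }

        // Calling page: the object only exists if construction succeeded.
        try {
            $camper = new SummerCamper($_GET['camper_id']);
        } catch (CamperNotFoundException $e) {
            $camper = null; // render a "camper not found" message instead of breaking the page
        }
        ?>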

    Read the article

  • C# web service, MySql encoding problem

    - by Boban
    I use a C# web service to insert, delete and get data from a MySQL database. The problem is that some of the data is in Macedonian (Cyrillic). When I insert directly into the database, it inserts fine. When I insert through the service, it does not: the Cyrillic text (shown above as "???") ends up stored as question marks. When I get data through the service, it comes back fine. What's the problem with the inserting? Here is part of my code for inserting:

        MySqlConnection connection = new MySqlConnection(MyConString);
        MySqlCommand command = connection.CreateCommand();
        command.CommandText = "INSERT INTO user (id_user, name) VALUES (NULL, ?name);";
        command.Parameters.Add("?name", MySqlDbType.VarChar).Value = name;
        connection.Open();
        command.ExecuteReader();
        connection.Close();
        return thisrow;

    Thank you in advance!
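    An editorial note, not from the original post: with MySQL Connector/NET the usual fix for this symptom is to declare the connection character set, either in the connection string or by issuing SET NAMES on the open connection. A hedged sketch (the connection-string values are placeholders, and the sample Cyrillic value is invented):

        using MySql.Data.MySqlClient;

        // Option 1: declare the character set in the connection string.
        string MyConString =
            "Server=localhost;Database=mydb;Uid=user;Pwd=secret;Charset=utf8;";

        using (MySqlConnection connection = new MySqlConnection(MyConString))
        {
            connection.Open();

            // Option 2: the equivalent per-connection statement.
            using (MySqlCommand setNames = new MySqlCommand("SET NAMES 'utf8';", connection))
            {
                setNames.ExecuteNonQuery();
            }

            using (MySqlCommand command = connection.CreateCommand())
            {
                command.CommandText = "INSERT INTO user (id_user, name) VALUES (NULL, ?name);";
                command.Parameters.Add("?name", MySqlDbType.VarChar).Value = "Скопје"; // sample Cyrillic value
                command.ExecuteNonQuery();
            }
        }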

    Read the article

  • WordPress generating slow MySQL queries - is it an index problem?

    - by tash
    Hello Stack Overflow, I've got very slow MySQL queries coming from my WordPress site. It's making everything slow and I think it is eating up CPU usage. I've pasted the EXPLAIN results for the two most frequently problematic queries below. This is a typical result, although very occasionally the queries do seem to be performed at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated from WordPress core code, and not from anything specific to my site, like the theme. I have a vague feeling that the database is not always using the indexes, or is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer - it is hugely appreciated.

    Query: [wp-blog-header.php(14): wp()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

    EXPLAIN:
      - id=1, select_type=SIMPLE, table=wp_posts, type=ref, possible_keys=type_status_date, key=type_status_date, key_len=63, ref=const, rows=427, Extra=Using where; Using filesort
    Query time: 34.2829 (ms)

    Query: [wp-content/themes/LMHR/index.php(40): query_posts()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.ID NOT IN (
              SELECT tr.object_id
              FROM wp_term_relationships AS tr
              INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
              WHERE tt.taxonomy = 'category' AND tt.term_id IN ('217', '218', '223', '224')
          )
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

    EXPLAIN:
      - id=1, select_type=PRIMARY, table=wp_posts, type=ref, possible_keys=type_status_date, key=type_status_date, key_len=63, ref=const, rows=427, Extra=Using where; Using filesort
      - id=2, select_type=DEPENDENT SUBQUERY, table=tr, type=ref, possible_keys=PRIMARY,term_taxonomy_id, key=PRIMARY, key_len=8, ref=func, rows=1, Extra=Using index
      - id=2, select_type=DEPENDENT SUBQUERY, table=tt, type=eq_ref, possible_keys=PRIMARY,term_id_taxonomy,taxonomy, key=PRIMARY, key_len=8, ref=antin1_lovemusic2010.tr.term_taxonomy_id, rows=1, Extra=Using where
    Query time: 70.3900 (ms)

    Read the article

  • How I can retrieve files from a folder on the hard disk, and how to display uploaded file data in a textarea

    - by Deepak Narwal
    I have made an application form in which I ask for a username, password, email ID and the user's resume. After uploading the resume I store it on the hard disk in htdocs/uploadedfiles/, in a format something like username_filename. In the database I store the file name, file size and file type. Some of the code for this is shown here:

        $filesize = $_FILES['file']['size'];
        $filename = $_FILES['file']['name'];
        $filetype = $_FILES['file']['type'];
        $temp_name = $_FILES['file']['tmp_name']; // temporary name of uploaded file
        $pwd_hash = hash('sha1', $_POST['password']);

        $target_path = "uploadedfiles/";
        $target_path = $target_path . $_POST['username'] . "_" . basename($_FILES['file']['name']);
        move_uploaded_file($_FILES['file']['tmp_name'], $target_path);

        $sql = "insert into employee values ('NULL','{$_POST[username]}','{$pwd_hash}','{$filename}','{$filetype}','$filesize',NOW())";

    Now I have two questions:
      1. How can I display this file's data in a textarea (something like naukri.com's resume section)?
      2. How can one retrieve the resume file from the folder on the hard disk? What query should I write to fetch the file from that folder? I know how to retrieve data from the database, but I don't know how to retrieve data from a folder on the hard disk, for example when the user wants to delete the file or download it.
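    A hedged sketch of the file-system side, added as an editorial illustration (paths follow the upload code above; the helper names are invented, and displaying a resume in a textarea only makes sense for plain-text files, not .doc or .pdf): the database only stores the file name, so once that name is fetched, the file itself is read, sent for download, or deleted with ordinary PHP file functions.

        <?php
        // Rebuild the path from what was stored at upload time.
        function resume_path($username, $filename) {
            return "uploadedfiles/" . $username . "_" . $filename;
        }

        // 1. Show a plain-text resume inside a textarea.
        function show_resume_in_textarea($path) {
            if (is_file($path)) {
                echo '<textarea rows="20" cols="80">'
                    . htmlspecialchars(file_get_contents($path))
                    . '</textarea>';
            }
        }

        // 2a. Send the file to the browser as a download (call before any other output).
        function download_resume($path) {
            header('Content-Type: application/octet-stream');
            header('Content-Disposition: attachment; filename="' . basename($path) . '"');
            header('Content-Length: ' . filesize($path));
            readfile($path);
            exit;
        }

        // 2b. Delete the file from disk (then delete its row from the employee table).
        function delete_resume($path) {
            if (is_file($path)) {
                unlink($path);
            }
        }
        ?>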

    Read the article
