Search Results

Search found 36756 results on 1471 pages for 'mysql real query'.

  • Filtering by entity key name in Google App Engine on Python

    - by Bemmu
    On Google App Engine to query the data store with Python, one can use GQL or Entity.all() and then filter it. So for example these are equivalent gql = "SELECT * FROM User WHERE age >= 18" db.GqlQuery(gql) and query = User.all() query.filter("age >=", 18) Now, it's also possible to query things by key name. I know that in GQL you do it like this gql = "SELECT * FROM User WHERE __key__ >= Key('User', 'abc')" db.GqlQuery(gql) But how would you now use filter to do the same? query = User.all() query.filter("__key__ >=", ?????)
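    A minimal sketch of the filter-based equivalent (not part of the question; it assumes the old google.appengine.ext.db API used above, where keys are built with Key.from_path):

    ```python
    from google.appengine.ext import db

    # Build the same key the GQL version compares against...
    key = db.Key.from_path('User', 'abc')

    # ...and use it directly as the filter value on the reserved __key__ property.
    query = User.all()
    query.filter("__key__ >=", key)
    ```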

    Read the article

  • OS Analytics with Oracle Enterprise Manager (by Eran Steiner)

    - by Zeynep Koch
    Oracle Enterprise Manager Ops Center provides a feature called "OS Analytics". This feature allows you to get a better understanding of how the Operating System is being utilized. You can research the historical usage as well as real time data. This post will show how you can benefit from OS Analytics and how it works behind the scenes. The recording of our call to discuss this blog is available here: https://oracleconferencing.webex.com/oracleconferencing/ldr.php?AT=pb&SP=MC&rID=71517797&rKey=4ec9d4a3508564b3 (download the presentation here). See also: the blog about Alert Monitoring and Problem Notification, and the blog about Using Operational Profiles to Install Packages and other content.

    Here is a quick summary of what you can do with OS Analytics in Ops Center:
      - View historical charts and real time values of CPU, memory, network and disk utilization
      - Find the top CPU and memory processes in real time or at a certain historical day
      - Determine proper monitoring thresholds based on historical data
      - Drill down into a process's details

    Where to start. To start with OS Analytics, choose the OS asset in the tree and click the Analytics tab. You can see the CPU utilization, memory utilization and network utilization, along with the current real time top 5 processes in each category. In that screen, you can click each of the top 5 processes to see a more detailed view of that process. One of the cool things is that you can see the process tree for this process along with some port bindings and open file descriptors. Next, click the "Processes" tab to see real time information of all the processes on the machine. An interesting column is the "Target" column. If you configured Ops Center to work with Enterprise Manager Cloud Control, then the two products will talk to each other and Ops Center will display the correlated target from Cloud Control in this table. If you are only using Ops Center, this column will remain empty.

    The "Threshold" tab is particularly helpful: you can view historical trends of different monitored values and, based on the graph, determine what the monitoring values should be. You can ask Ops Center to suggest monitoring levels based on the historical values or you can set your own. The different colors in the graph represent the current set levels: red for critical, yellow for warning and blue for information, allowing you to quickly see how they're positioned against real data. It's important to note that when looking at longer periods, Ops Center smooths out the data and uses averages. So when looking at values such as CPU usage, try shorter time frames which are more detailed, such as one hour or one day.

    Applying new monitoring values. When first applying new values to monitored attributes, a popup will come up asking if it's OK to take you out of the current Monitoring Policy. This is OK if you want to either have custom monitoring for a specific machine, or if you want to use the current machine as a "gold image" and extract a Monitoring Policy from it. You can later apply the new Monitoring Policy to other machines and also set it as a default Monitoring Profile. Once you're done applying the different monitoring values, you can review and change them in the "Monitoring" tab. You can also click "Extract a Monitoring Policy" in the actions pane on the right to save all the new values to a new Monitoring Policy, which can then be found under "Plan Management" -> "Monitoring Policies".

    Visiting the past. Under the "History" tab you can "go back in time". This is very helpful when you know that a machine was busy a few hours ago (perhaps in the middle of the night?), but you were not around to take a look at it in real time. Looking at yesterday's data on one of the machines, you can see an interesting CPU spike happening at around 3:30 am along with some memory use. In the bottom table you can see the top 5 CPU and memory consumers at the requested time. Very quickly you can see that this spike is related to the Solaris 11 IPS repository synchronization process using the "pkgrecv" command. The "time machine" doesn't stop here - you can also view historical data to determine which of the zones was the busiest at a given time.

    Under the hood. The data collected is stored on each of the agents under /var/opt/sun/xvm/analytics/historical/:
      - An "os.zip" file exists for the main OS. Inside you will find many small text files, named after the Epoch time stamp at which they were taken.
      - If you have any zones, there will be a file called "guests.zip" containing the same small files for all the zones, as well as a folder with the name of each zone along with an "os.zip" in it.
      - If this is the Enterprise Controller or the Proxy Controller, you will have folders called "proxy" and "sat" in which you will find the "os.zip" for that controller.
    The actual script collecting the data can be viewed for debugging purposes as well: on Linux, the location is /opt/sun/xvmoc/private/os_analytics/collect. If you would like to redirect all the standard error into a file for debugging, touch the following file and the output will go into it: # touch /tmp/.collect.stderr. The temporary data is collected under /var/opt/sun/xvm/analytics/.collectdb until it is zipped. If you would like to review the properties for the Analytics, you can view those per agent in /opt/sun/n1gc/lib/XVM.properties. Find the section "Analytics configurable properties for OS and VSC" to view the Analytics specific values. I hope you find this helpful! Please post questions in the comments below. Eran Steiner

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows Security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts to log on. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network, so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

    Windows wants you to use Windows Live. Windows Live logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because - well it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I do get all my Live logins to work from the Live account so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account - this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do.

    Who am I? The real problem to me though is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example, Windows reports folder security showing my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl) it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account and I have to log in to Windows with my Live account, what do you suppose this returns? Console.WriteLine(Environment.UserName); It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in Task Manager (or Process Explorer for me) I see everything running on the desktop shell with my login running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup!

    So this is who you really are! The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication - I tried to log in with my real Windows account login because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings and my app's login settings and I just could not for the life of me get into the site with my Windows username. That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS if I look at a trace page (or in my case my app's status page) I see that the logged-on account is - my Windows user account. What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote desktop into my Web server online - I have to use the local authentication dialog but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

    So in summary, when you log on with a Live account you are actually mapped to an underlying Windows user. In any application, if you check the user name it'll be the underlying user account (not sure what happens in a Windows RT app or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password you have to use your Live IDs, even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior…

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows.

    Read the article

  • Using static methods or non-static methods in a DAO class?

    - by dankyy1
    Hi, I write DAO classes for some DB operations in this manner. Is it better to make the methods of the DAO class static or non-static? Using the sample DAO class below, if more than one client happens to use the AddSampleItem method at the same time, what may result? public class SampleDao { static DataAcessor dataAcessor; public static void AddSampleItem(object[] params) { dataAcessor = new DataAcessor(); //generate query here string query="..."; dataAcessor.ExecuteQery(query); dataAcessor.Close(); } public static void UpdateSampleItem(object[] params) { dataAcessor = new DataAcessor(); //generate query here string query="..."; dataAcessor.ExecuteQery(query); dataAcessor.Close(); } }
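    A minimal sketch of the concern (not from the question; DataAcessor, ExecuteQery and Close are the asker's own names, assumed to behave as shown above): with a shared static field, two concurrent callers can overwrite each other's accessor, so one thread may close or reuse the connection the other is still using. Keeping the accessor local to each call avoids that:

    ```csharp
    // Hypothetical rework for illustration only.
    public class SampleDao
    {
        public static void AddSampleItem(object[] values)
        {
            // A call-local accessor: no shared static field for threads to fight over.
            DataAcessor dataAcessor = new DataAcessor();
            try
            {
                string query = "...";           // generate query here
                dataAcessor.ExecuteQery(query);
            }
            finally
            {
                dataAcessor.Close();            // closed even if the query throws
            }
        }
    }
    ```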

    Read the article

  • PHP - static DB class vs DB singleton object

    - by Marco Demaio
    I don't want to create a discussion about singleton being better than static or better than global, etc. I read dozens of questions about it on SO, but I couldn't come up with an answer to this SPECIFIC question, so I hope someone could now illuminate me by answering this question with one (or more) real simple EXAMPLES, and not theoretical discussions. In my app I have the typical DB class needed to perform tasks on the DB without having to write mysql_connect/mysql_select_db/mysql... everywhere in the code (moreover, in the future I might decide to use another type of DB engine in place of MySQL, so obviously I need an abstraction class). I could write the class either as a static class: class DB { private static $connection = FALSE; //connection to be opened //DB connection values private static $server = NULL; private static $usr = NULL; private static $psw = NULL; private static $name = NULL; public static function init($db_server, $db_usr, $db_psw, $db_name) { //simply stores connection values, without opening connection } public static function query($query_string) { //performs query over already opened connection, if not open, it opens connection 1st } ... } or as a Singleton class: class DBSingleton { private static $inst = NULL; private $connection = FALSE; //connection to be opened //DB connection values private $server = NULL; private $usr = NULL; private $psw = NULL; private $name = NULL; public static function getInstance($db_server, $db_usr, $db_psw, $db_name) { //simply stores connection values, without opening connection if(self::$inst === NULL) self::$inst = new DBSingleton(); return self::$inst; } private function __construct()... public function query($query_string) { //performs query over already opened connection, if connection is not open, it opens connection 1st } ... } Then in my app if I want to query the DB I could do //Performing query using static DB object DB::init(HOST, USR, PSW, DB_NAME); DB::query("SELECT..."); //Performing query using DB singleton $temp = DBSingleton::getInstance(HOST, USR, PSW, DB_NAME); $temp->query("SELECT..."); My simple brain sees that the singleton's only advantage is avoiding having to declare each method of the class as 'static'. I'm sure some of you could give me an EXAMPLE of a real advantage of the singleton in this specific case. Thanks in advance.

    Read the article

  • Fortran intent(inout) vs no intent

    - by Andrew Walker
    Good practice dictates that subroutine arguments in Fortran should each have a specified intent (i.e. intent(in), intent(out) or intent(inout), as described in this question): subroutine bar (a, b) real, intent(in) :: a real, intent(inout) :: b b = b + a ... However, not specifying an intent is valid Fortran: subroutine bar (a, b) real, intent(in) :: a real :: b b = b + a ... Are there any real differences, beyond compile-time checking, between an argument specified as intent(inout) and an argument without a specified intent? Is there anything I should worry about if I'm retrofitting intents to older, intent-free code?

    Read the article

  • Print the Jena result set in HTML (servlet/JSP)

    - by Udayanga
    Hi, I'm using a servlet for manipulating an ontology. I got the result of my SPARQL query and I want to display (print) that result in JSP (servlet). The following code segment can be used to print the result to the console. . . . com.hp.hpl.jena.query.Query query = QueryFactory.create(queryStr); QueryExecution qe = QueryExecutionFactory.create(query,model); com.hp.hpl.jena.query.ResultSet rs = qe.execSelect(); ResultSetFormatter.out(System.out, rs); Any idea? Thanks in advance!
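    A minimal sketch (not from the question) of one way to emit the bindings as an HTML table from a servlet; the PrintWriter is assumed to come from response.getWriter(), and ResultSet/QuerySolution are the Jena types already used above:

    ```java
    import com.hp.hpl.jena.query.QuerySolution;
    import com.hp.hpl.jena.query.ResultSet;
    import java.io.PrintWriter;

    public class SparqlHtmlWriter {
        // Writes each solution as a table row, one cell per result variable.
        public static void write(ResultSet rs, PrintWriter out) {
            out.println("<table>");
            while (rs.hasNext()) {
                QuerySolution sol = rs.nextSolution();
                out.println("<tr>");
                for (String name : rs.getResultVars()) {
                    out.println("<td>" + sol.get(name) + "</td>");
                }
                out.println("</tr>");
            }
            out.println("</table>");
        }
    }
    ```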

    Read the article

  • imap_open() says "invalid remote specification" and fails to connect

    - by Kristopher Ives
    When I try to use imap_open I get the following error: Warning: imap_open() [function.imap-open]: Couldn't open stream {mail.domain.com:110/pop3/novalidate-cert/} in /path/to/mailbox.php on line 5 Can't open mailbox {mail.domain.com:110/pop3/novalidate-cert/}: invalid remote specification My phpinfo says that I have: IMAP c-Client Version 2007e SSL Support enabled Kerberos Support enabled On another server that gives the same phpinfo for imap it works, although that version is 2006. PHP says it was compiled with the following settings: './configure' '--disable-path-info-check' '--enable-exif' '--enable-fastcgi' '--enable-ftp' '--enable-gd-native-ttf' '--enable-libxml' '--enable-mbstring' '--enable-pdo=shared' '--enable-soap' '--enable-sockets' '--enable-zip' '--prefix=/usr' '--with-bz2' '--with-curl=/opt/curlssl/' '--with-freetype-dir=/usr' '--with-gd' '--with-gettext' '--with-imap=/opt/php_with_imap_client/' '--with-imap-ssl=/usr' '--with-jpeg-dir=/usr' '--with-kerberos' '--with-libexpat-dir=/usr' '--with-libxml-dir=/opt/xml2' '--with-libxml-dir=/opt/xml2/' '--with-mysql=/usr' '--with-mysql-sock=/var/lib/mysql/mysql.sock' '--with-mysqli=/usr/bin/mysql_config' '--with-openssl=/usr' '--with-openssl-dir=/usr' '--with-pdo-mysql=shared' '--with-pdo-sqlite=shared' '--with-pgsql=/usr' '--with-png-dir=/usr' '--with-sqlite=shared' '--with-ttf' '--with-xpm-dir=/usr' '--with-zlib' '--with-zlib-dir=/usr'

    Read the article

  • Compiled Linq Queries with Built-in SQL functions

    - by Brandi
    I have a query that I am executing in C# that is taking way too much time: string Query = "SELECT COUNT(HISTORYID) FROM HISTORY WHERE YEAR(CREATEDATE) = YEAR(GETDATE()) "; Query += "AND MONTH(CREATEDATE) = MONTH(GETDATE()) AND DAY(CREATEDATE) = DAY(GETDATE()) AND USERID = '" + EmployeeID + "' "; Query += "AND TYPE = '5'"; I then use SqlCommand Command = new SqlCommand(Query, Connection) and SqlDataReader Reader = Command.ExecuteReader() to read in the data. This is taking over a minute to execute from C#, but is much quicker in SSMS. I see from google searching you can do something with CompiledQuery, but I'm confused whether I can still use the built in SQL functions YEAR, MONTH, DAY, and GETDATE. If anyone can show me an example of how to create and call a compiled query using the built in functions, I will be very grateful! Thanks in advance.
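    A sketch of one alternative (not the asker's code, and not the CompiledQuery approach itself): express the same count as a parameterised ADO.NET query, replacing the string concatenation and the per-row YEAR/MONTH/DAY calls with a date range on CREATEDATE. Connection and identifier names are assumed from the question:

    ```csharp
    using System;
    using System.Data.SqlClient;

    static class HistoryQueries
    {
        // Counts today's TYPE = '5' HISTORY rows for one user, letting SQL Server
        // use an index on CREATEDATE instead of evaluating functions per row.
        public static int CountTodaysHistory(SqlConnection connection, string employeeId)
        {
            const string sql =
                @"SELECT COUNT(HISTORYID) FROM HISTORY
                  WHERE CREATEDATE >= @today AND CREATEDATE < @tomorrow
                    AND USERID = @userId AND TYPE = '5'";

            using (SqlCommand command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@today", DateTime.Today);
                command.Parameters.AddWithValue("@tomorrow", DateTime.Today.AddDays(1));
                command.Parameters.AddWithValue("@userId", employeeId);
                return (int)command.ExecuteScalar();
            }
        }
    }
    ```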

    Read the article

  • Transforming a string to a valid PDO_MYSQL DSN

    - by Alix Axel
    What is the most concise way to transform a string in the following format: mysql:[/[/]][user[:pass]@]host[:port]/db[/] Into a usable PDO connection/instance (using the PDO_MYSQL DSN), some possible examples: $conn = new PDO('mysql:host=host;dbname=db'); $conn = new PDO('mysql:host=host;port=3307;dbname=db'); $conn = new PDO('mysql:host=host;port=3307;dbname=db', 'user'); $conn = new PDO('mysql:host=host;port=3307;dbname=db', 'user', 'pass'); I've been trying some regular expressions (preg_[match|split|replace]) but they either don't work or are too complex, my gut tells me this is not the way to go but nothing else comes to my mind. Any suggestions?
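    A rough sketch of one possible approach (not a definitive parser): strip the optional slashes, let parse_url() split the rest by giving it a URL-style scheme, then assemble the PDO_MYSQL DSN from the parts. The function name is made up for illustration, and the input is assumed to always start with "mysql:" as in the format above:

    ```php
    <?php
    function pdoFromShorthand($str) {
        $rest  = ltrim(substr($str, strlen('mysql:')), '/');
        $parts = parse_url('mysql://' . $rest);

        $dsn = 'mysql:host=' . $parts['host'];
        if (isset($parts['port'])) {
            $dsn .= ';port=' . $parts['port'];
        }
        $dsn .= ';dbname=' . trim($parts['path'], '/');

        return new PDO($dsn,
                       isset($parts['user']) ? $parts['user'] : null,
                       isset($parts['pass']) ? $parts['pass'] : null);
    }
    ```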

    Read the article

  • print customized result in prolog

    - by Allan Jiang
    I am working on a simple Prolog program. Here is my problem. Say I already have a fact fruit(apple). I want the program to take an input like this ?- input([what,is,apple]). and output apple is a fruit and for an input like ?- input([is,apple,a,fruit]), instead of the default true or false, I want the program to print a nicer phrase such as yes or no. Can someone help me with this? My code part is below: input(Text) :- phrase(sentence(S), Text), perform(S). %... sentence(query(Q)) --> query(Q). query(Query) --> ['is', Thing, 'a', Category], { Query =.. [Category, Thing]}. % here it will print true/false, is there a way in prolog to have it print yes/no, %like in other language: if(q){write("yes")}else{write("no")} perform(query(Q)) :- Q.
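    A minimal sketch, assuming the grammar above stays as is: wrap the goal in an if-then-else inside perform/1 so the program prints yes or no itself instead of leaving the answer to the top level:

    ```prolog
    % Calls the built query; prints yes when it succeeds, no when it fails.
    perform(query(Q)) :-
        (   call(Q)
        ->  write(yes), nl
        ;   write(no), nl
        ).
    ```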

    Read the article

  • Does the optimizer filter subqueries with outer where clauses

    - by Mongus Pong
    Take the following query: select * from ( select a, b from c UNION select a, b from d ) where a = 'mung' Will the optimizer generally work out that I am filtering a on the value 'mung' and consequently push that filter down into each of the queries in the subquery? Or will it run each query within the subquery union and return the results to the outer query for filtering (as the query would perhaps suggest)? In that case the following query would perform better: select * from ( select a, b from c where a = 'mung' UNION select a, b from d where a = 'mung' ) Obviously query 1 is best for maintenance, but is it sacrificing much performance for this? Which is best?
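    One way to settle it for a given engine (a sketch, not from the question; EXPLAIN output and the need for a derived-table alias vary by engine) is to compare the plans of the two forms and see whether the predicate is pushed into both UNION branches:

    ```sql
    -- Form 1: filter applied outside the UNION.
    EXPLAIN
    SELECT *
    FROM (
        SELECT a, b FROM c
        UNION
        SELECT a, b FROM d
    ) AS u
    WHERE a = 'mung';

    -- Form 2: filter repeated inside each branch, for comparison.
    EXPLAIN
    SELECT *
    FROM (
        SELECT a, b FROM c WHERE a = 'mung'
        UNION
        SELECT a, b FROM d WHERE a = 'mung'
    ) AS u;
    ```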

    Read the article

  • PostgreSQL - pg_class question

    - by Sachin Chourasiya
    PostgreSQL stores statistics about tables in the system table called pg_class. The query planner accesses this table for every query. These statistics may only be updated using the analyze command. If the analyze command is not run often, the statistics in this table may not be accurate and the query planner may make poor decisions which can degrade system performance. Another strategy would be for the query planner to generate these statistics for each query (including selects, inserts, updates, and deletes). This approach would allow the query planner to have the most up-to-date statistics possible. Why does Postgres always rely on pg_class instead?

    Read the article

  • Is this a valid benefit of using embedded SQL over stored procedures?

    - by George
    Here's an argument for SPs that I haven't heard. Flamers, be gentle with the downvote. Since there is overhead associated with each trip to the database server, I would suggest that a POSSIBLE reason for placing your SQL in SPs over embedded code is that you are more insulated to change without taking a performance hit. For example: let's say you need to perform Query A that returns a scalar integer. Then, later, the requirements change and you decide that if the result of that scalar is x, then, and only then, you need to perform another query. If you performed the first query in a SP, you could easily check the result of the first query and conditionally execute the 2nd SQL in the same SP. How would you do this efficiently in embedded SQL without performing a separate round trip or an unnecessary query? Here's an example: --This SP may return 1 or two queries. SELECT @CustCount = COUNT(*) FROM CUSTOMER IF @CustCount > 10 SELECT * FROM PRODUCT Can this be done, and what is the best way to do it, in embedded SQL?

    Read the article

  • Doctrine MySQL 150+ tables: generating models works, but not vice-versa?

    - by ropstah
    I can generate my models and schema.yml file based on an existing database. But when I try to do it the other way around using Doctrine::createTablesFromModels() i get an error: Syntax error or access violation: 1064 So either of these works: Doctrine::generateYamlFromDb(APPPATH . 'models/yaml'); Doctrine::generateYamlFromModels(APPPATH . 'models/yaml', APPPATH . 'models'); Doctrine::generateModelsFromYaml(APPPATH . 'models/yaml', APPPATH . 'models', array('generateTableClasses' => true)); Doctrine::generateModelsFromDb(APPATH . 'models', array('default'), array('generateTableClasses' => true)); But this fails (it drops/creates the database and around 50 tables): Doctrine::dropDatabases(); Doctrine::createDatabases(); Doctrine::createTablesFromModels(); The partially outputted SQL query shows that the error is around the Notification object which looks like this: Any leads would be highly appreciated! <?php // Connection Component Binding Doctrine_Manager::getInstance()->bindComponent('Notification', 'default'); /** * BaseNotification * * This class has been auto-generated by the Doctrine ORM Framework * * @property integer $n_auto_key * @property integer $type * @property string $title * @property string $message * @property timestamp $entry_date * @property timestamp $update_date * @property integer $u_auto_key * @property integer $c_auto_key * @property integer $ub_auto_key * @property integer $o_auto_key * @property integer $notified * @property integer $read * @property integer $urgence * * @package ##PACKAGE## * @subpackage ##SUBPACKAGE## * @author ##NAME## <##EMAIL##> * @version SVN: $Id: Builder.php 6820 2009-11-30 17:27:49Z jwage $ */ abstract class BaseNotification extends Doctrine_Record { public function setTableDefinition() { $this->setTableName('Notification'); $this->hasColumn('n_auto_key', 'integer', 4, array( 'type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => true, 'autoincrement' => true, 'length' => '4', )); $this->hasColumn('type', 'integer', 1, array( 'type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '1', )); $this->hasColumn('title', 'string', 50, array( 'type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '50', )); $this->hasColumn('message', 'string', null, array( 'type' => 'string', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '', )); $this->hasColumn('entry_date', 'timestamp', 25, array( 'type' => 'timestamp', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '25', )); $this->hasColumn('update_date', 'timestamp', 25, array( 'type' => 'timestamp', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => false, 'autoincrement' => false, 'length' => '25', )); $this->hasColumn('u_auto_key', 'integer', 4, array( 'type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => true, 'autoincrement' => false, 'length' => '4', )); $this->hasColumn('c_auto_key', 'integer', 4, array( 'type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => false, 'autoincrement' => false, 'length' => '4', )); $this->hasColumn('ub_auto_key', 'integer', 4, array( 'type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => false, 'autoincrement' => false, 'length' => '4', )); 
$this->hasColumn('o_auto_key', 'integer', 4, array( 'type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'notnull' => false, 'autoincrement' => false, 'length' => '4', )); $this->hasColumn('notified', 'integer', 1, array( 'type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'default' => '0', 'notnull' => true, 'autoincrement' => false, 'length' => '1', )); $this->hasColumn('read', 'integer', 1, array( 'type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'default' => '0', 'notnull' => true, 'autoincrement' => false, 'length' => '1', )); $this->hasColumn('urgence', 'integer', 1, array( 'type' => 'integer', 'fixed' => 0, 'unsigned' => false, 'primary' => false, 'default' => '0', 'notnull' => true, 'autoincrement' => false, 'length' => '1', )); } public function setUp() { parent::setUp(); } }

    Read the article

  • Insert using stored procedure from NHibernate

    - by jcreddy
    Hi, I am using the following code snippets to insert values using a stored procedure. The code executes successfully but no record is inserted in the DB. Please suggest with a simple example. **---- stored procedure--------** Create PROCEDURE [dbo].[SampleInsert] @id int, @name varchar(50) AS BEGIN insert into test (id, name) values (@id, @name); END **------.hbm file-------** <?xml version="1.0" encoding="utf-8" ?> <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"> <sql-query name="Procedure"> exec SampleInsert :Id,:Name </sql-query> </hibernate-mapping> **--------c# code to insert value using above sp------** ISessionFactory sessionFactory = new Configuration().Configure().BuildSessionFactory(); ISession session = sessionFactory.OpenSession(); IQuery query = session.GetNamedQuery("Procedure"); query.SetParameter("Id", "222"); query.SetParameter("Name", "testsp"); query.SetResultTransformer(new NHibernate.Transform.AliasToBeanConstructorResultTransformer(typeof(Procedure).GetConstructors()[0])); Regards Jcreddy

    Read the article

  • pagination in php error

    - by fusion
    i've implemented this pagination class for my webpage in a separate file called class.pagination.php, but when i execute the page, nothing happens. it just displays a blank page. this is my search.php file, where i'm calling this class: <?php include 'config.php'; require ('class.pagination.php'); $search_result = ""; $search_result = $_GET["q"]; $search_result = trim($search_result); //Check if the string is empty if ($search_result == "") { echo "<p class='error'>Search Error. Please Enter Your Search Query.</p>" ; exit(); } //search query for multiple keywords if(!empty($search_result)) { // the table to search $table = "thquotes"; // explode search words into an array $arraySearch = explode(" ", $search_result); // table fields to search $arrayFields = array(0 => "cQuotes"); $countSearch = count($arraySearch); $a = 0; $b = 0; $query = "SELECT cQuotes, vAuthor, cArabic, vReference FROM ".$table." WHERE ("; $countFields = count($arrayFields); while ($a < $countFields) { while ($b < $countSearch) { $query = $query."$arrayFields[$a] LIKE '%$arraySearch[$b]%'"; $b++; if ($b < $countSearch) { $query = $query." AND "; } } $b = 0; $a++; if ($a < $countFields) { $query = $query.") OR ("; } } $query = $query.")"; $result = mysql_query($query, $conn) or die ('Error: '.mysql_error()); $totalrows = mysql_num_rows($result); if($totalrows < 1) { echo '<span class="error2">No matches found for "'.$search_result.'"</span>'; } else { ?> <div class="caption">Search Results</div> <div class="center_div"> <table> <?php while ($row= mysql_fetch_array($result, MYSQL_ASSOC)) { $cQuote = highlightWords(htmlspecialchars($row['cQuotes']), $search_result); ?> <tr> <td style="text-align:right; font-size:15px;"><?php h($row['cArabic']); ?></td> <td style="font-size:16px;"><?php echo $cQuote; ?></td> <td style="font-size:12px;"><?php h($row['vAuthor']); ?></td> <td style="font-size:12px; font-style:italic; text-align:right;"><?php h($row['vReference']); ?></td> </tr> <?php } ?> </table> </div> <?php } } else { exit(); } //Setting Pagination $pagination = new pagination(); $pagination->byPage = 5; $pagination->rows = $totalrows; // number of records in a table-back mysql_num_rows () instance or another, you have to play $from = $pagination->fromPagination(); // sql used for applications such LIMIT $ from, $ pagination-> byPage $pages = $pagination->pages(); if(isset($pages)) {?> <div class="pagination"> <?foreach ($pages as $key){?> <?if($key['current'] == 1) {?> <a href="?p=<?=$key['p']?>" class="active"><?=$key['page']?></a> <?} else {?> <a href="?p=<?=$key['p']?>" class="inactive"><?=$key['page']?></a> <?}?> <?}?> </div> <?} //End Pagination ?>

    Read the article

  • nodejs, mongodb - How do I operate on data from multiple queries?

    - by Ryan
    Hi all, I'm new to JS in general, but I am trying to query some data from MongoDB. Basically, my first query retrieves information for the session with the specified session id. The second query does a simple geospacial query for documents with a location near the specified location. I'm using the mongodb-native javascript driver. All of these query methods return their results in callbacks, so they're non-blocking. This is the root of my troubles. What I'm needing to do is retrieve the results of the second query, and create an Array of sessionIds of all the returned documents. Then I'm going to pass those to a function later. But, I can't generate this array and use it anywhere outside the callback. Does anyone have any idea how to properly do this? http://dpaste.org/wXiE/
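    A minimal sketch (collection, field names and the follow-up function are assumed, not from the question): with callback-based drivers the usual pattern is to build the array inside the geospatial query's callback and hand it to the next function from there, rather than trying to read it outside the callback:

    ```javascript
    // Assumes "collection" is an open mongodb-native collection and that each
    // document carries a sessionId field.
    collection.find({ loc: { $near: [lng, lat] } }).toArray(function (err, docs) {
        if (err) { throw err; }

        // Build the array of session ids from the returned documents...
        var sessionIds = docs.map(function (doc) { return doc.sessionId; });

        // ...and pass it on from inside the callback.
        handleSessions(sessionIds);   // hypothetical follow-up function
    });
    ```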

    Read the article

  • Linq to Entities strange deployment behavior

    - by SO give me back my rep
    Hi, I started building apps with this technology and I am facing a weird problem... on some machines I need to add these lines to the app.config to get it to work: <system.data> <DbProviderFactories> <add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient" description=".Net Framework Data Provider for MySQL" type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.3.0.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" /> </DbProviderFactories> </system.data> while on other machines it runs well without these lines... The thing is that when I add these lines the app won't run on machines that did not need them in the first place, and I would like not to publish two versions of the app. Is there a way to solve this?

    Read the article

  • Using cachedwithin attribute inside cfquery

    - by Jason
    When you use the cachedwithin attribute in a cfquery, how does it store the query in memory? Does it store it by only the name you assign to the query? For example, if on my index page I cache a query for an hour and name it getPeople, will a query with the same name on a different page (or the same page for that matter) use the cached results, or does it use some better logic to decide if it is the same query? Also, if there is a variable in your query, does the cache take the value of the variable into account?

    Read the article

  • files get uploaded just before they get cancelled

    - by user1763986
    Got a little situation here where I am trying to cancel a file's upload. What I have done is stated that if the user clicks on the "Cancel" button, then it will simply remove the iframe so that it does not go to the page where it uploads the files into the server and inserts data into the database. Now this works fine if the user clicks on the "Cancel" button in quickish time the problem I have realised though is that if the user clicks on the "Cancel" button very late, it sometimes doesn't remove the iframe in time meaning that the file has just been uploaded just before the user has clicked on the "Cancel" button. So my question is that is there a way that if the file does somehow get uploaded before the user clicks on the "Cancel" button, that it deletes the data in the database and removes the file from the server? Below is the image upload form: <form action="imageupload.php" method="post" enctype="multipart/form-data" target="upload_target_image" onsubmit="return imageClickHandler(this);" class="imageuploadform" > <p class="imagef1_upload_process" align="center"> Loading...<br/> <img src="Images/loader.gif" /> </p> <p class="imagef1_upload_form" align="center"> <br/> <span class="imagemsg"></span> <label>Image File: <input name="fileImage" type="file" class="fileImage" /></label><br/> <br/> <label class="imagelbl"><input type="submit" name="submitImageBtn" class="sbtnimage" value="Upload" /></label> </p> <p class="imagef1_cancel" align="center"> <input type="reset" name="imageCancel" class="imageCancel" value="Cancel" /> </p> <iframe class="upload_target_image" name="upload_target_image" src="#" style="width:0px;height:0px;border:0px;solid;#fff;"></iframe> </form> Below is the jquery function which controls the "Cancel" button: $(imageuploadform).find(".imageCancel").on("click", function(event) { $('.upload_target_image').get(0).contentwindow $("iframe[name='upload_target_image']").attr("src", "javascript:'<html></html>'"); return stopImageUpload(2); }); Below is the php code where it uploads the files and inserts the data into the database. The form above posts to this php page "imageupload.php": <body> <?php include('connect.php'); session_start(); $result = 0; //uploads file move_uploaded_file($_FILES["fileImage"]["tmp_name"], "ImageFiles/" . 
$_FILES["fileImage"]["name"]); $result = 1; //set up the INSERT SQL query command to insert the name of the image file into the "Image" Table $imagesql = "INSERT INTO Image (ImageFile) VALUES (?)"; //prepare the above SQL statement if (!$insert = $mysqli->prepare($imagesql)) { // Handle errors with prepare operation here } //bind the parameters (these are the values that will be inserted) $insert->bind_param("s",$img); //Assign the variable of the name of the file uploaded $img = 'ImageFiles/'.$_FILES['fileImage']['name']; //execute INSERT query $insert->execute(); if ($insert->errno) { // Handle query error here } //close INSERT query $insert->close(); //Retrieve the ImageId of the last uploded file $lastID = $mysqli->insert_id; //Insert into Image_Question Table (be using last retrieved Image id in order to do this) $imagequestionsql = "INSERT INTO Image_Question (ImageId, SessionId, QuestionId) VALUES (?, ?, ?)"; //prepare the above SQL statement if (!$insertimagequestion = $mysqli->prepare($imagequestionsql)) { // Handle errors with prepare operation here echo "Prepare statement err imagequestion"; } //Retrieve the question number $qnum = (int)$_POST['numimage']; //bind the parameters (these are the values that will be inserted) $insertimagequestion->bind_param("isi",$lastID, 'Exam', $qnum); //execute INSERT query $insertimagequestion->execute(); if ($insertimagequestion->errno) { // Handle query error here } //close INSERT query $insertimagequestion->close(); ?> <!--Javascript which will output the message depending on the status of the upload (successful, failed or cancelled)--> <script> window.top.stopImageUpload(<?php echo $result; ?>, '<?php echo $_FILES['fileImage']['name'] ?>'); </script> </body> UPDATE: Below is the php code "cancelimage.php" where I want to delete the cancelled file from the server and delete the record from the database. It is set up but not finished, can somebody finish it off so I can retrieve the name of the file and it's id using $_SESSION? <?php // connect to the database include('connect.php'); /* check connection */ if (mysqli_connect_errno()) { printf("Connect failed: %s\n", mysqli_connect_error()); die(); } //remove file from server unlink("ImageFiles/...."); //need to retrieve file name here where the ... line is //DELETE query statement where it will delete cancelled file from both Image and Image Question Table $imagedeletesql = " DELETE img, img_q FROM Image AS img LEFT JOIN Image_Question AS img_q ON img_q.ImageId = img.ImageId WHERE img.ImageFile = ?"; //prepare delete query if (!$delete = $mysqli->prepare($imagedeletesql)) { // Handle errors with prepare operation here } //Dont pass data directly to bind_param store it in a variable $delete->bind_param("s",$img); //execute DELETE query $delete->execute(); if ($delete->errno) { // Handle query error here } //close query $delete->close(); ?> Can you please provide an sample code in your answer to make it easier for me. Thank you

    Read the article

  • Empty database output in CI

    - by den-javamaniac
    Hi. I'm building a simple app and trying to test DB result output. But unfortunately all I'm getting is an array of size 0. Here's the controller code excerpt: $data['query'] = $this->db->query('SELECT role_id, role_privilege FROM role'); $this->load->view('welcome_message', $data); And a view code excerpt: <?php echo count($query->result_array())."<br/>"; foreach ($query->result() as $row){ echo $row->role_id . '<br/>'; echo $row->role_privilege . '<br/>'; } echo 'Total result '.$query->num_rows(); ?> And this is what I get: 0 Total result Running the query from a command line gives an output of 2 rows. Can someone point out what I'm missing?

    Read the article

  • How many results were returned?

    - by Pastor Bones
    I'm performing a query on a worksheet. I want to update the row if it exists or insert it if it doesn't. How can you check if a result was returned or not? $query = new Zend_Gdata_Spreadsheets_ListQuery(); $query->setSpreadsheetKey($this->currKey); $query->setWorksheetId($this->currWkshtId); $query->setSpreadsheetQuery('cid = ' . $data['cid']); $listFeed = $this->gdClient->getListFeed($query); // This does not work! if(empty($listFeed)){ echo 'No results found!'; }

    Read the article
