Search Results

Search found 17593 results on 704 pages for 'wmi query'.

Page 466/704

  • SQL Count in View as column

    - by alex
    I'm trying to get the result of a COUNT as a column in my view. The query below shows the kind of thing I want (this is just for demo purposes):

        SELECT ProductID, Name, Description, Price,
            (SELECT COUNT(*) FROM ord WHERE ord.ProductID = prod.ProductID) AS TotalNumberOfOrders
        FROM tblProducts prod
        LEFT JOIN tblOrders ord ON prod.ProductID = ord.ProductID

    This obviously isn't working... so what would the correct way of doing this be?
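
    A minimal sketch of one way to write this, assuming the tblProducts/tblOrders names from the question: drop the correlated subquery and aggregate over the LEFT JOIN instead.

        -- Counts orders per product; COUNT(ord.ProductID) ignores the NULLs
        -- produced by products with no orders, so those products show 0.
        SELECT prod.ProductID, prod.Name, prod.Description, prod.Price,
               COUNT(ord.ProductID) AS TotalNumberOfOrders
        FROM tblProducts prod
        LEFT JOIN tblOrders ord ON prod.ProductID = ord.ProductID
        GROUP BY prod.ProductID, prod.Name, prod.Description, prod.Price;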

  • .Net Remote Log Querying

    - by jlafay
    I have a Windows service that I'm working on. It consists of the service itself, a WF service (using WorkflowServiceHost), a workflow (WorkflowApplication) that queries and processes data from a SQL Server DB, and a Comm Marshall class that handles data flow between the service and the WF. The WF does a lot of heavy data processing; the original app (early VB6) logged all the processing and displayed the results on the screen of the host machine.

    Critical events will be committed to the event log, since I strongly believe that should be common practice: admins naturally look there, and it already has support for remote viewing. The workflow will also need to write logging events as it processes and iterates according to our business logic, such as records queried, records returned, records processed, etc. The data is very critical and we need to log actions as they occur. The logs are currently kept as text files on disk, and I think that is OK. Ideally I would like to record log events in XML so they are easier to query, and because it is less costly than a DB, especially since our DB servers do a lot of heavy processing anyway.

    Since we are essentially replacing a VB6 application with a robust Windows service (taking advantage of WF 4.0), it has been requested that a remote client also be created. It receives callbacks from the service after subscribing to it and being added to a collection of subscribers. Basic statistics and summaries are updated client side after receiving basic monitoring data about what is going on with the service. We would also like a way to provide details when we need to examine things further, because this is a long-running data processing service and issues need to be addressed immediately.

    What is the best way to implement some type of query that is sent from the client to the service and returned to the client? Would it be efficient to expose another method on the service and have it pass the request off to some querying class/object that examines the XML files against whichever specification and then returns the result to the client? That's the main concern: I don't want the service's processing to bottleneck much while this occurs. It seems that WF already threads well auto-magically for the most part, but I want to make sure this is the right way to go about it. Any suggestions/recommendations on how to architect and implement a small log querying framework for a remote service would be awesome.

  • MongoDB querying for multiple parameters

    - by gaggina
    I have this collection:

        {
            "name" : "montalto",
            "users" : [
                {
                    "username" : "ciccio",
                    "email" : "aaaaaaaa",
                    "password" : "aaaaaaaa",
                    "money" : 0
                }
            ],
            "numers" : "8",
            "_id" : ObjectId("5040d3fded299bf03a000002")
        }

    If I want to search for a document with the name montalto and a user named ciccio, I'm using the following query:

        db.coll.find({name:'montalto', users:{username:'ciccio'}}).count()

    But it does not work. Where am I going wrong?

  • Calculated group-by fields in MongoDB

    - by Navin Viswanath
    For this example from the MongoDB documentation, how do I write the query using MongoTemplate?

        db.sales.aggregate([
            { $group : {
                _id : {
                    month: { $month: "$date" },
                    day: { $dayOfMonth: "$date" },
                    year: { $year: "$date" }
                },
                totalPrice: { $sum: { $multiply: [ "$price", "$quantity" ] } },
                averageQuantity: { $avg: "$quantity" },
                count: { $sum: 1 }
            } }
        ])

    Or in general, how do I group by a calculated field?

  • apc_delete() not working in background script

    - by Jared
    I have a shell background convertor on my video website and I can't seem to get APC to delete a key as a file is uploaded and its visibility is updated. The script is structured like so:

        if (file_exists($output_file)) {
            $conn->query("UPDATE `foo` SET `bar` = 1 WHERE `id` = ".$id." LIMIT 1");
            apc_delete('feed:'.$id);
        }

    Everything works fine except for the APC call, and this is the only script on the site that has had this problem. I'm stumped.

  • MySQL select - improve performance

    - by realshadow
    Hey, I am working on an e-shop which sells products only via loans. I display 10 products per page in any category, and each product has 3 different price tags for 3 different loan types. Everything went well during testing and query execution time was fine, but today, when the changes were transferred to the production server, the site "collapsed" in about 2 minutes. The query that is used to select loan types sometimes hangs for ~10 seconds, and it happens frequently, so the site can't keep up and is very slow.

    The table that is used to store the data has approximately 2 million records, and each select looks like this:

        SELECT * FROM products_loans
        WHERE KOD IN ("X17/Q30-10", "X17/12", "X17/5-24")
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;

    There are 3 loan types, and the price needs to be in the range between CENA_OD and CENA_DO, so 3 rows are returned. But since I need to display 10 products per page, I need to run it through a modified select using OR, since I didn't find any other solution to this. I have asked about it here, but got no answer. As mentioned in the referenced post, this has to be done separately since there is no column that could be used in a join (except of course price and code, but that ended very, very badly).

    Here is the SHOW CREATE TABLE; KOD and CENA_OD/CENA_DO are indexed via INDEX:

        CREATE TABLE `products_loans` (
          `KOEF_ID` bigint(20) NOT NULL,
          `KOD` varchar(30) NOT NULL,
          `AKONTACIA` int(11) NOT NULL,
          `POCET_SPLATOK` int(11) NOT NULL,
          `koeficient` decimal(10,2) NOT NULL default '0.00',
          `CENA_OD` decimal(10,2) default NULL,
          `CENA_DO` decimal(10,2) default NULL,
          `PREDAJNA_CENA` decimal(10,2) default NULL,
          `AKONTACIA_SUMA` decimal(10,2) default NULL,
          `TYP_VYHODY` varchar(4) default NULL,
          `stage` smallint(6) NOT NULL default '1',
          PRIMARY KEY (`KOEF_ID`),
          KEY `CENA_OD` (`CENA_OD`),
          KEY `CENA_DO` (`CENA_DO`),
          KEY `KOD` (`KOD`),
          KEY `stage` (`stage`)
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8

    Selecting all loan types and filtering them through PHP afterwards doesn't work well either, since each type has over 50k records and that select takes too much time as well... Any ideas about improving the speed are appreciated.

    Edit: Here is the EXPLAIN:

        +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+
        | id | select_type | table          | type  | possible_keys       | key  | key_len | ref  | rows   | Extra       |
        +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+
        |  1 | SIMPLE      | products_loans | range | CENA_OD,CENA_DO,KOD | KOD  | 92      | NULL | 190158 | Using where |
        +----+-------------+----------------+-------+---------------------+------+---------+------+--------+-------------+

    I have tried the combined index and it improved the performance on the test server from 0.44 sec to 0.06 sec. I can't access the production server from home though, so I will have to try it tomorrow.
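
    A minimal sketch of the combined index mentioned at the end of the question, assuming MySQL and the columns shown above; the index name is illustrative.

        -- Composite index so the IN list on KOD and the range check on the
        -- price columns can be served by a single index.
        ALTER TABLE products_loans
            ADD INDEX idx_kod_cena (KOD, CENA_OD, CENA_DO);

        -- The per-page lookup then stays as in the question:
        SELECT * FROM products_loans
        WHERE KOD IN ('X17/Q30-10', 'X17/12', 'X17/5-24')
          AND 369.27 BETWEEN CENA_OD AND CENA_DO;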

  • How to enter a manual time stamp in getdate()

    - by Arunachalam
    How do I enter a manual time stamp in getdate()?

        SELECT CONVERT(varchar(10), GETDATE(), 120)

    returns 2010-06-07. Now I want to enter my own time stamp in this, like 2010-06-07 10.00.00.000. I am using it in

        SELECT * FROM sample table WHERE time_stamp = '2010-06-07 10.00.00.000'

    Since I am trying to automate this query, I need the current date but with a different time stamp. Can it be done?
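
    A minimal sketch of one way to do this, assuming SQL Server: keep the date part of GETDATE() and append a fixed time of day. Note that T-SQL datetime literals use colons rather than dots in the time part, and sample_table / time_stamp are stand-in names following the question's wording.

        -- Today's date from GETDATE(), combined with a fixed 10:00 time of day.
        DECLARE @stamp datetime;
        SET @stamp = CONVERT(datetime,
                             CONVERT(varchar(10), GETDATE(), 120) + ' 10:00:00.000',
                             121);

        SELECT * FROM sample_table WHERE time_stamp = @stamp;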

  • MySQL: Is it possible to compute MAX(AVG(field))?

    - by Brad
    My current query reads:

        SELECT entry_id, user_id, cat_id, AVG(rating) AS avg_rate
        FROM entry_rate
        WHERE 1
        GROUP BY entry_id

    cat_id relates to different categories: 1, 2, 3 or 4. Is there a way I can find the maximum average for each user in each category without setting up an additional table? The result could potentially be 4 maximum avg_rate values for each user_id. See the link below for an example: http://lh5.ggpht.com/_rvDQuhTddnc/S8Os_77qR9I/AAAAAAAAA2M/IPmzNeYjfCA/s800/table1.jpg
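
    A minimal sketch of one approach, assuming the entry_rate columns from the question and that each entry belongs to a single user and category: average per entry first, then take the maximum of those averages per user and category.

        SELECT user_id, cat_id, MAX(avg_rate) AS max_avg_rate
        FROM (
            -- Average rating per entry (per user and category)
            SELECT entry_id, user_id, cat_id, AVG(rating) AS avg_rate
            FROM entry_rate
            GROUP BY entry_id, user_id, cat_id
        ) AS per_entry
        GROUP BY user_id, cat_id;  -- up to 4 rows (one per category) per user_id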

  • How to validate login details using the Google Apps API?

    - by Pari
    Hi, I am using the code below to create a ContactsService and to validate login details:

        ContactsService obj_ContactService = new ContactsService("");
        obj_ContactService.setUserCredentials(userEmail, password);

    But even if the user enters invalid details, the code above does not throw any exception. The user only gets verified when I call the "Insert" query after adding the whole contact details. In my application, however, I want to notify the user immediately after they enter their login details. Thanks

  • Question regarding MySQL indices and their functionality

    - by user281434
    Hi. Say I have an ordinary table in my db like so:

        ----------------------------
        | id | username | password |
        ----------------------------
        | 24 | blah     | blah     |
        ----------------------------

    A primary key is assigned to the id column. Now when I run a MySQL query like this:

        SELECT id FROM table WHERE username = 'blah' LIMIT 1

    does that primary key index even help? If I am telling it to match usernames, then shouldn't the username column be indexed instead? Thanks for your time.
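
    A minimal sketch, assuming MySQL and using `users` as a stand-in name for the table above: the primary key on id only speeds up lookups by id, so a search by username needs its own index on that column.

        -- Index the column used in the WHERE clause.
        ALTER TABLE users ADD INDEX idx_username (username);

        -- With InnoDB, secondary indexes also store the primary key value,
        -- so "SELECT id ... WHERE username = '...'" can be answered from this
        -- index alone without touching the table rows.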

  • T-SQL: from rows to columns but not an actual pivot

    - by Matte
    Is there a T-SQL (SQL Server 2008 R2) query to transform TABLE_1 into the expected result set?

    TABLE_1

        +----------+-------------------------+---------+------+
        | IdDevice | Timestamp               | M300    | M400 |
        +----------+-------------------------+---------+------+
        | 3        | 2012-12-05 16:29:51.000 | 2357,69 | 520  |
        | 6        | 2012-12-05 16:29:51.000 | 1694,81 | 470  |
        | 1        | 2012-12-05 16:29:51.000 | 2046,33 | 111  |
        +----------+-------------------------+---------+------+

    Expected result set

        +-------------------------+---------+--------+---------+--------+---------+--------+
        | Timestamp               | 3_M300  | 3_M400 | 6_M300  | 6_M400 | 1_M300  | 1_M400 |
        +-------------------------+---------+--------+---------+--------+---------+--------+
        | 2012-12-05 16:29:51.000 | 2357,69 | 520    | 1694,81 | 470    | 2046,33 | 111    |
        +-------------------------+---------+--------+---------+--------+---------+--------+
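
    A minimal sketch of one approach, assuming the three device ids are known in advance: conditional aggregation per device, grouped by Timestamp.

        SELECT
            [Timestamp],
            MAX(CASE WHEN IdDevice = 3 THEN M300 END) AS [3_M300],
            MAX(CASE WHEN IdDevice = 3 THEN M400 END) AS [3_M400],
            MAX(CASE WHEN IdDevice = 6 THEN M300 END) AS [6_M300],
            MAX(CASE WHEN IdDevice = 6 THEN M400 END) AS [6_M400],
            MAX(CASE WHEN IdDevice = 1 THEN M300 END) AS [1_M300],
            MAX(CASE WHEN IdDevice = 1 THEN M400 END) AS [1_M400]
        FROM TABLE_1
        GROUP BY [Timestamp];

    If the device ids are not known in advance, the column list would have to be built with dynamic SQL.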

  • GROUP BY a date, with ordering by date.

    - by standard
    Take this simple query:

        SELECT DATE_FORMAT(someDate, '%y-%m-%d') AS formattedDay
        FROM someTable
        GROUP BY formattedDay

    This will select rows from a table with only one row per date. How do I ensure that the row selected per date is the earliest for that date, without doing an ordered subquery in the FROM? Cheers
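
    A minimal sketch of one approach, assuming MySQL and the someTable/someDate names from the question: group to find MIN(someDate) per calendar day (no ORDER BY involved), then join back to fetch the matching rows.

        SELECT t.*
        FROM someTable t
        JOIN (
            -- Earliest timestamp per calendar day
            SELECT DATE(someDate) AS day, MIN(someDate) AS first_seen
            FROM someTable
            GROUP BY DATE(someDate)
        ) firsts ON t.someDate = firsts.first_seen;
        -- If two rows share the exact earliest timestamp of a day, both are returned.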

  • php download file slows

    - by hobbywebsite
    OK, first off thanks for your time; I wish I could give more than one point for this question.

    Problem: I have some music files on my site (.mp3) and I am using a PHP file to increment a database counter for the number of downloads and to point to the file to download. For some reason this method starts at 350kb/s, then slowly drops to 5kb/s, at which point the file says it will take 11hrs to complete. BUT if I go directly to the .mp3 file, my browser brings up a player and then I can right click and "save as", which works fine: the download completes in 3 minutes. (Yes, both during the same time, for those thinking it's my connection or ISP, and it's not my server either.) The only thing that I've been playing around with recently is the php.ini and the .htaccess files. So without further ado, the PHP file, php.ini, and .htaccess:

    download.php

        <?php
        include("config.php");
        include("opendb.php");

        $filename = 'song_name';
        $filedl = $filename . '.mp3';

        $query = "UPDATE songs SET song_download=song_download+1 WHERE song_linkname='$filename'";
        mysql_query($query);

        header('Content-Disposition: attachment; filename='.basename($filedl));
        header('Content-type: audio/mp3');
        header('Content-Length: ' . filesize($filedl));
        readfile('/music/' . $filename . '/' . $filedl);

        include("closedb.php");
        ?>

    php.ini

        register_globals = off
        allow_url_fopen = off
        expose_php = Off
        max_input_time = 60
        variables_order = "EGPCS"
        extension_dir = ./
        upload_tmp_dir = /tmp
        precision = 12
        SMTP = relay-hosting.secureserver.net
        url_rewriter.tags = "a=href,area=href,frame=src,input=src,form=,fieldset="
        ; Defines the default timezone used by the date functions
        date.timezone = "America/Los_Angeles"

    .htaccess

        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} !^(www.MindCollar.com)?$ [NC]
        RewriteRule (.*) http://www.MindCollar.com/$1 [R=301,L]
        <IfModule mod_rewrite.c>
        RewriteEngine On
        ErrorDocument 404 /errors/404.php
        ErrorDocument 403 /errors/403.php
        ErrorDocument 500 /errors/500.php
        </IfModule>
        Options -Indexes
        Options +FollowSymlinks
        <Files .htaccess>
        deny from all
        </Files>

    Thanks for your time.
