Search Results

Search found 22756 results on 911 pages for 'power query'.


  • SQL syntax Newbie student

    - by sammysmall
    Describe the output of the following SQL query:

        select custId, name
        from customer
        where region = "New York"
        UNION
        select cust.custId, cust.name
        from customer cust
        where cust.custId IN
            (select cust_order.custId
             from customer_order cust_order, company_employee comp_emp
             where cust_order.salesEmpId = comp_emp.empId
               AND comp_emp.name = 'DANIEL');

    My question: in the line "from customer cust", is cust referring to a column in the customer table? This is a homework question. I have identified the components leading up to this line, and I think that cust is a column in the customer table. I am not asking for an overall solution, just a bit of encouragement if I am on the right track.
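    For reference, a minimal sketch of the syntax in question: "cust" here acts as a table alias (a shorthand name for the table within the query), not a column.

        -- These two statements are equivalent; "cust" merely renames
        -- the customer table for the duration of the query.
        select custId, name from customer;
        select cust.custId, cust.name from customer cust;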

    Read the article

  • Can I run a new WP_Query inside the Loop without affecting the Loop? (wordpress)

    - by Radek
    The function below works fine, but I need to run it inside the Loop. When I do, the post content is taken from the last post of my WP_Query, not from the one that should appear. Is there any way to run my query and leave the Loop unaffected?

        function recent_post_by_author() {
            echo '<div class="recent_post_by_author">';
            $my_query = new WP_Query('author_name=Radek&showposts=2');
            while ($my_query->have_posts()) : $my_query->the_post(); ?>
                <a href="<?php the_permalink() ?>" title="<?php the_title(); ?>"><?php the_title(); ?></a><BR>
            <?php endwhile;
            echo '</div>';
        }

    Read the article

  • Communicate between content script and options page

    - by Gaurang Tandon
    I have seen many questions on this already, but they are all about background page to content script. Summary: my extension has an options page and a content script. The content script handles the storage functionality (chrome.storage manipulation). Whenever a user changes a setting on the options page, I want to send a message to the content script telling it to store the new data. My code:

    options.js:

        var data = "abcd"; // example data
        chrome.tabs.query({ active: true, currentWindow: true }, function (tabs) {
            chrome.tabs.sendMessage(tabs[0].id, "storeData:" + data, function (response) {
                console.log(response); // gives undefined :(
            });
        });

    content script js:

        chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
            // not working
        });

    My questions: why isn't this approach working, and is there another (better) approach for this procedure?

    Read the article

  • Windows for IoT, continued

    - by Valter Minute
    Originally posted on: http://geekswithblogs.net/WindowsEmbeddedCookbook/archive/2014/08/05/windows-for-iot-continued.aspx

    I received a lot of interesting feedback on my previous blog post, and I found some time to do additional tests. Bert Kleinschmidt pointed out that pins 2, 3 and 10 of the Galileo are connected directly to the SOC, while pin 13, the one used for the sample sketch, is controlled via an I2C I/O expander. I changed my code to use pin 2 instead of 13 (just changing the variable assignment at the beginning of the code) and latency was greatly reduced. Now each pulse lasts 1.44ms, 44% more than the expected time, but far better than the result we got using pin 13. I also used SetThreadPriority to raise the priority of the thread running the sketch to THREAD_PRIORITY_HIGHEST, but that didn't change the results. When I was using the I2C-controlled pin I tried the same thing and the timings got far worse (increasing more than 10 times), so I didn't comment on that part at the time, wanting to investigate the issue in a bit more detail. It seems that increasing the priority of the application thread negatively impacts I2C communication. I also tried the Linux-based implementation (using a different Galileo board, since the one provided by MS seems to use a different firmware), and the results of running the sample blink sketch, modified to use pin 2 and blink the led for 1ms, are similar to those we got on the same board running Windows. Here the difference between expected and measured time is worse, around 3.2ms instead of 1 (320% compared to 150% using Windows, but far from the 100.1% we got with the 8-bit Arduino). Neither system was under load during the test; loading some applications that use part of the CPU time would probably make those timings even less reliable, but I think these numbers are enough to draw some conclusions. It may not be worth running a full OS if what you need is Arduino compatibility. The Arduino UNO is probably the best Arduino you can find for this kind of development. The Galileo, running either the Linux-based stack or Windows for IoT, is targeted as a platform for "Internet of Things" devices, whatever that means. At the moment I don't see the "I" part of IoT. We have low-level interfaces (SPI, I2C, the GPIO pins) that can be used to connect sensors, but support for connectivity is limited, and the amount of work required to deliver some data to the cloud (using a secure HTTP request or a message queuing system like AMQP or MQTT) is still big; the rich OS underneath seems to provide no help there. Why should I have to use sockets, with no access to the high-level connectivity features we have on "full" Windows? I know that it's possible to use some third-party libraries, try to build them using the Windows for IoT SDK, and so on, but this means re-inventing the wheel every time, and it can also lead to IP concerns if used in products meant to be closed-source. I hope that MS and Intel (and others) will focus less on the "coolness" of running (some) Arduino sketches and more on providing a better platform for people who really want to design devices that leverage internet connectivity and cloud processing power to deliver better products and services. Providing a reliable set of connectivity services would be a great start. Providing support for .NET would be even better, leaving native code available for hardware access etc.
    I know that those components may require additional storage and memory, etc., so making the OS componentizable (or, at least, providing a way to install additional components) would be a great way to let developers pick the parts of the system they need for their solution, knowing that they will integrate well together. I can understand that the success of the Arduino and the Raspberry Pi* may have attracted the attention of marketing departments worldwide, and almost any new development board these days is promoted as "XXX's response to Arduino" or "YYYY's alternative to Raspberry Pi", but this is misleading and prevents companies from focusing on how to deliver good products and how to integrate "IoT" features with their existing offerings to provide, in the end, a better product or service to their customers. Marketing is important, but it can't decide the key features of a product (the OS) that is going to be used to develop full products for end customers, integrating it with hardware and application software. I really like the "hackable" nature of open-source devices, and I like to see that companies are becoming more and more open in releasing information, providing "hackable" devices and supporting developers with documentation, good samples, etc. On the other hand, being able to run a sketch designed for an 8-bit microcontroller on a full-featured application processor may sound cool and look like an easy upgrade path for people who have just experimented with sensors on Arduino, but it's not, in my humble opinion, the main path to follow for people who want to deliver real products.

    *Shameless self-promotion: if you are looking for a good book in Italian about the Raspberry Pi, try mine: http://www.amazon.it/Raspberry-Pi-alluso-Digital-LifeStyle-ebook/dp/B00GYY3OKO

    Read the article

  • Reported error code considered SQL Injection?

    - by inquam
    SQL injection that actually runs a SQL command is one thing. But injecting data that doesn't run a harmful query, yet might tell you something valuable about the database: is that considered SQL injection, or is it just used as one step in constructing a valid SQL injection? An example could be

        set rs = conn.execute("select headline from pressReleases where categoryID = " & cdbl(request("id")))

    Passing this a string that could not be turned into a numeric value would cause

        Microsoft VBScript runtime error '800a000d' Type mismatch: 'cdbl'

    which would tell you that the column in question only accepts numeric data and is thus probably of type integer or similar. I find this discussed on a lot of pages about SQL injection, but never with a clear answer as to whether it is in itself considered SQL injection. The reason for my question is that I have a scanning tool that reports a SQL injection vulnerability, citing a VBScript runtime error '800a000d' as the reason for the finding.
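    Whatever the terminology, the standard defence is the same: keep user input out of the statement text entirely. A minimal sketch of a parameterized version in T-SQL, reusing the table and column names from the example above:

        -- The value bound to @id is never parsed as SQL, so it can
        -- neither inject commands nor leak type information through
        -- a conversion error inside the query string.
        EXEC sp_executesql
            N'SELECT headline FROM pressReleases WHERE categoryID = @id',
            N'@id int',
            @id = 42;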

    Read the article

  • Counting consecutive items within MS SQL

    - by Greg
    Got a problem with a query I'm trying to write. I have a table that lists people who have been sent an email. There is a bit column named Active which is set to true if they have responded. I need to count the number of consecutive emails for which each person has been inactive, counting from either their first email or their last active email. For example, this basic table shows one person who has been sent 9 emails. They were active on two of the emails (3 and 5), so their inactive count would be 4, counting from email number 6 onwards.

        PersonID(int)  EmailID(int)  Active(bit)
        1              1             0
        1              2             0
        1              3             1
        1              4             0
        1              5             1
        1              6             0
        1              7             0
        1              8             0
        1              9             0

    Any pointers or help would be great. Regards, Greg
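    One common shape for this, assuming EmailID reflects send order, is to count the emails that come after each person's last active one (or all of their emails if the person was never active). A sketch, with EmailLog as a hypothetical name for the table above:

        -- People whose most recent email was active produce no row here
        -- (their inactive count is zero).
        SELECT e.PersonID, COUNT(*) AS InactiveCount
        FROM EmailLog e
        WHERE e.EmailID > ISNULL((SELECT MAX(a.EmailID)
                                  FROM EmailLog a
                                  WHERE a.PersonID = e.PersonID
                                    AND a.Active = 1), 0)
        GROUP BY e.PersonID;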

    Read the article

  • NVP request - CreateRecurringPaymentsProfile

    - by jiwanje.mp
    I'm facing a problem with the trial period and the first month's payment. My requirement is that users can sign up with a trial period which allows new users a 30-day free trial. This means they will not be charged the monthly price until after the first 30 days, when the regular amount will be charged. The next billing date should therefore be one month after the profile start date, but the next billing date and the profile start date show the same value when I query with GetRecurringPaymentProfile. Please help me with how to send the recurring payment profile request for this functionality. Thanks in advance, jiwan

    Read the article

  • BYOD-The Tablet Difference

    - by Samantha.Y. Ma
    By Allison Kutz, Lindsay Richardson, and Jennifer Rossbach, Sales Consultants

    Less than three years ago, Apple introduced a new concept to the world: the tablet. It’s hard to believe that in only 32 months the iPad has ushered in an entirely new way to do business. Because of their mobility and ease of use, tablets have grown in popularity to keep up with the increasingly “on the go” lifestyle, and their popularity isn’t expected to decrease any time soon. In fact, global tablet sales are expected to increase drastically within the next five years, from 56 million tablets to 375 million by 2016. Tablets have been put to every use imaginable in today’s world. With over 730,000 active applications available for the iPad, these tablets are educational devices, portable book collections, gateways into social media, entertainment for children when Mom and Dad need a minute on their own, and so much more. It’s no wonder that 74% of those who own a tablet use it daily, 60% use it several times a day, and an average of 13.9 hours per week are spent tapping away. Tablets have become a critical part of a user’s personal life; but why stop there? Businesses today are taking major strides in implementing these devices, with the hope of benefiting from efficiency and productivity gains. Limo and taxi drivers use tablets as payment devices instead of traditional cash transactions. Retail outlets use tablets to find the exact merchandise customers are looking for. Professors use tablets to teach their classes, and business professionals demonstrate solutions and review reports from tablets. Since an overwhelming majority of tablet users have started to use their personal iPads, PlayBooks, Galaxys, etc. in the workforce, organizations have had to make a change. In many cases, companies are willing to make that change. In fact, 79% of companies are making new investments in mobility this year. Gartner reported that 90% of organizations are expected to support corporate applications on personal devices by 2014. It’s not just companies that are changing. Business professionals have become accustomed to tablets making their personal lives easier, and want that same effect in the workplace. Professionals no longer want to waste time manually entering data into their computer, or worse yet into a notebook, especially when the data has to be transcribed later into an online system. The response: the Bring Your Own Device phenomenon. According to Gartner, BYOD is “an alternative strategy allowing employees, business partners and other users to utilize a personally selected and purchased client device to execute enterprise applications and access data.” Employees whose companies embrace this trend are more efficient because they get to use devices they are already accustomed to. Tablets change the game when it comes to how sales professionals perform their jobs. Sales reps can easily store and access customer information and analytics using tablet applications, such as Oracle Fusion Tap.
    This method is much more enticing for sales reps than spending time logging interactions on their (seemingly outdated) computers. Forrester and IDC report that, on average, sales reps spend 65% of their time on activities other than selling, so having a tablet application to use on the go is extremely powerful. In February, Information Week released a list of “9 Powerful Business Uses for Tablet Computers,” ranging from “enhancing the customer experience” to “improving data accuracy” to “eco-friendly motivations”. Tablets complement the lifestyle of professionals who strive to be effective and efficient, both in the office and on the road.

    Three Things Businesses Need to Do to Embrace BYOD:

      • Make customer-facing websites tablet-friendly for consistent user experiences
      • Develop tablet applications to continue to enhance the customer experience
      • Embrace and use the technology that comes with tablets

    Almost 55 million people in the U.S. own tablets because they are convenient, easy, and powerful. These are qualities that companies strive to achieve with any piece of technology. The inherent power of the devices, coupled with the growing number of business applications, ensures that tablets will transform the way that companies and employees perform.

    Read the article

  • What is the scope of JS variables in anonymous functions

    - by smorhaim
    Why does this code return $products empty? If I test $products inside the function it does show data, but once it finishes I can't seem to get at the data.

        var $products = new Array();
        connection.query($sql, function (err, rows, fields) {
            if (err) throw err;
            for (i = 0; i < rows.length; i++) {
                $products[rows[i].source_identifier] = "xyz";
            }
        });
        connection.end();
        console.log($products); // Shows empty.

    Read the article

  • LOAD DATA INFILE not working in mariadb

    - by Haseena
    I am trying to migrate from MySQL to MariaDB, and I have hit an issue with MariaDB. When I try to load a data file into a table, it shows an error like:

        SQL Error (29): File 'C:/Documents and Settings/Administrator/Local Settings/Temp/SAMPLE/DATA_TEMP1351761841668/SampleFile0' not found (Errcode: 2)

    But the file does exist at that path. Another point is that the same command works successfully with MySQL. Does MariaDB have a permission issue? I am logged in as Administrator. See my query below:

        load data infile "'C:/Documents and Settings/Administrator/Local Settings/Temp/SAMPLE/DATA_TEMP1351761841668/SampleFile0" into table SAMPLETABLE;

    When I change the path to something like "C:/SampleFile0" it works properly; from the Administrator folder it doesn't. Can anyone help me in this regard? I am new to MariaDB.
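    One thing worth ruling out in a case like this: with plain LOAD DATA INFILE the file is read by the server process, whose service account may not be able to see another user's profile folder even when the logged-in Administrator can. The LOCAL variant makes the client read the file instead; a minimal sketch against the table above:

        -- LOCAL: the client, not the server's service account, opens the file.
        LOAD DATA LOCAL INFILE 'C:/Documents and Settings/Administrator/Local Settings/Temp/SAMPLE/DATA_TEMP1351761841668/SampleFile0'
        INTO TABLE SAMPLETABLE;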

    Read the article

  • How to get equivalent of ResultSetMetaData without ResultSet

    - by javanix
    Hey guys, I need to resolve a bunch of column names to column indexes (so as to use some of the nice ResultSetMetaData methods). However, the only way I know to get a ResultSetMetaData object is by calling getMetaData() on some ResultSet. The problem I have with that is that grabbing a ResultSet takes up unnecessary resources, to my mind: I don't really need to query the data in the table, I just want some information about the table. Does anyone know of a way to get a ResultSetMetaData object without first getting a ResultSet (from a potentially huge table)? Thanks!
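    A common workaround is to run a query whose predicate can never match; the driver still builds complete metadata for every column, but the database fetches no rows, so the size of the table is irrelevant:

        -- Zero rows come back, but getMetaData() on the resulting
        -- ResultSet still describes all columns of the table.
        SELECT * FROM some_table WHERE 1 = 0;

    For table information without any query at all, java.sql.Connection.getMetaData() returns a DatabaseMetaData object whose getColumns() method covers much of the same ground.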

    Read the article

  • Disk-based caching of dynamic images in IIS 7

    - by Daniel Schierbeck
    I'm writing an image server which needs to handle a relatively large number of concurrent requests (~5,000). The images being served are dynamically scaled down and cropped based on per-image specifications, which are queried from a database. The number of images is rather large, so an in-memory cache isn't viable (thrashing would most definitely occur). I'm using native caching in IIS 7 to avoid hitting the ASP.NET app which generates the images on the fly. I've looked around, but I couldn't find a simple way to configure IIS to store the cache on disk. Is there such an option, or would I need to roll my own? I'd rather avoid placing the generated images in a public folder so they can be served statically, since I would prefer to invalidate the cache entries using a query parameter (last-edit time from the database), which doesn't seem possible to reconcile with static caching. I would love to get some feedback on this!

    Read the article

  • How to add Transactions with a DataSet created using the Add Connection Wizard?

    - by RoguePlanetoid
    I have a DataSet that I have added to my project, in which I can insert and add records using the Add Query function in Visual Studio 2010. However, I want to add transactions to this. I have found a few examples, but cannot seem to find one that works with these generated classes. I know I need to use the SqlClient.SqlTransaction class somehow. I used the Add New Data Source wizard and added the tables/views/functions I need; I just need an example using this process, such as how to get the connection my DataSet has used. Assuming all options have been set in the wizard, and I am only using the pre-defined adapters and options asked for in the wizard, how do I add the transaction logic? For example, I have a DataSet called ProductDataSet with the XSD created for it, and I have added my Stock table as a data source and added an AddStock method with the wizard; for a new item this in turn calls an AddItem method. If either of these fails I want to roll back both the AddItem and the AddStock.
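    Whatever the wiring ends up looking like on the TableAdapter side, the semantics being asked for are those of a single database transaction. A minimal T-SQL sketch of the rollback behaviour described, using hypothetical Item and Stock tables:

        BEGIN TRANSACTION;
        BEGIN TRY
            INSERT INTO Item (Name) VALUES ('Widget');
            INSERT INTO Stock (ItemId, Qty) VALUES (SCOPE_IDENTITY(), 10);
            COMMIT TRANSACTION;   -- both inserts become visible together
        END TRY
        BEGIN CATCH
            ROLLBACK TRANSACTION; -- or neither is applied
        END CATCH;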

    Read the article

  • How to get object properties of specific class in SPARQL

    - by udayalkonline
    Hi, I have an ontology (campus.owl). There are three classes (Student, Sport, Lecturer). The Student class is joined to the Lecturer class with a "has" object property, and the Student class is joined to the Sport class with an "isPlay" object property. Problem: I want to get the object property between Student and Lecturer using a SPARQL query.

        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
        PREFIX my: <http://www.semanticweb.org/ontologies/2010/5/Ontology1275975684120.owl#>
        SELECT ?prop
        WHERE { ?prop ..........??? }

    Any ideas? Thanks in advance!

    Read the article

  • problem with sql job and datetime parameter

    - by geoff swartz
    Another developer created a stored procedure that is set up to run as a SQL job each month. It takes one parameter, of type datetime. When I try to invoke it in the job, or just in a query window, I get the error "Incorrect syntax near ')'". The call to execute it is:

        exec CreateHeardOfUsRecord getdate()

    When I give it a hard-coded date, like exec CreateHeardOfUsRecord '4/1/2010', it works fine. Any idea why I can't use getdate() in this context? Thanks.
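    For reference, T-SQL does not accept a function call (or any other expression) as an argument in an EXEC statement; arguments must be constants or variables. The usual pattern is to capture the value first:

        -- Assign getdate() to a variable, then pass the variable.
        DECLARE @now DATETIME;
        SET @now = GETDATE();
        EXEC CreateHeardOfUsRecord @now;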

    Read the article

  • Correct Way to Get Date Between Dates In SQL Server

    - by Chuck Haines
    I have a table in SQL Server which has a DATETIME field called Date_Printed. I am trying to get all records in the table which lie within a specified date range. Currently I am using the following SQL:

        DECLARE @StartDate DATETIME
        DECLARE @EndDate DATETIME
        SET @StartDate = '2010-01-01'
        SET @EndDate = '2010-06-18 12:59:59 PM'
        SELECT * FROM table WHERE Date_Printed BETWEEN @StartDate AND @EndDate

    I have an index on the Date_Printed column. I was wondering if this is the best way to get the rows in the table which lie between those dates, or if there is a faster way. The table has about 750,000 records in it right now and it will continue to grow. The query is pretty fast, but I'd like to make it faster if possible.
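    An indexed BETWEEN is already a reasonable shape for this. One common refinement is a half-open range, which avoids boundary-time guesswork entirely (note that '12:59:59 PM' is one second before 1pm, not end of day) while still using the index:

        -- Everything from @StartDate up to, but not including, the day
        -- after the range; no end-of-day timestamp needs to be faked.
        SELECT *
        FROM [table]
        WHERE Date_Printed >= @StartDate
          AND Date_Printed < @EndDateExclusive;  -- e.g. '2010-06-19'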

    Read the article

  • What is the effect of running an application with "Unlimited Stack" size

    - by NSA
    Hello all, I have inherited some code that I need to maintain, and it can be less than stable at times. The previous developers are no longer available to ask why they ran the application in an environment with an unlimited stack size, and I am curious what the effects of this could be. The application has some unpredictable memory bugs that we cannot find, and running it under Valgrind is not an option because it slows the application down so much that we cannot actually run it. Any thoughts on what the effects of an unlimited stack might be are appreciated. Thank you.

    Read the article

  • In-memory Database in Excel

    - by user329174
    Hello, I am looking for a way to import a data table from Access into an Excel variable and then run queries against that variable to speed up the process. I am trying to migrate from C# .NET, where I read a data table from an Access database into memory and then used LINQ to query the dataset; it is MUCH faster than how I currently have it coded in VBA, where I must make lots of calls to the actual database, which is slow. I have seen QueryTable mentioned, but it appears to require pasting the data into the Excel sheet. I would like to keep everything in memory and minimize the interaction between the Excel sheet and the VBA code as much as possible. I wish we didn't need to use Excel+VBA for this, but we're kind of stuck with that for now. Thanks for the help!

    Read the article

  • NHibernate - is property lazy loading possible?

    - by Ben
    I've got some binary data that I store, and I was going to separate it out into its own table so that it could be lazy loaded. However, I then came across this post by Ayende (http://ayende.com/Blog/archive/2010/01/27/nhibernate-new-feature-lazy-properties.aspx), which suggests that property lazy loading is now possible. I have added the lazy="true" attribute to my property mapping, but the field is still loaded from the database (I am using a simple text field to test). My query:

        return _session.CreateQuery("from Product")
            .SetMaxResults(1)
            .UniqueResult<Product>();

    Mapping:

        <property name="Description" type="string" column="FullDescription" lazy="true"/>

    Has anyone been able to get this working? Personally I prefer this approach to having to add another table to my database.

    Read the article

  • 24 Hours of PASS – first reflections

    - by Rob Farley
    A few days after the end of 24HOP, I find myself reflecting on it. I’m still waiting on most of the information. I want to be able to discover things like which countries were represented in each of the sessions, and things like that. So far, I have the feedback scores and the numbers of attendees. The data was provided in a PDF, so while I wait for it to appear in a more flexible format, I’ve pushed the 24 attendee numbers into Excel. This chart shows the numbers by time. Remember that we started at midnight GMT, which was 10:30am in my part of the world and 8pm in New York. It’s probably no surprise that numbers drooped a bit at the start, stayed comparatively low, and then grew as the larger populations of the English-speaking world woke up. I remember that last time 24HOP ran for 24 hours straight, there were quite a few sessions with fewer than 100 attendees. None this time though. We got close, but even when it was 4am in New York, 8am in London and 7pm in Sydney (which would have to be the worst slot for attracting people), we still had over 100 people tuning in. As expected, numbers grew as the UK woke up, and even more so as the US did, with numbers peaking at 755 for the “3pm in New York” session on SQL Server Data Tools. Kendra Little almost reached those numbers too, and certainly contributed the biggest ‘spike’ on the chart with her session five hours earlier. Of all the sessions, Kendra had the highest proportion of ‘Excellent’s for the “Overall Evaluation of the session” question, and those of you who saw her probably won’t be surprised by that. Kendra had one of the best-ranked sessions from the 24HOP event this time last year (narrowly missing out on being top 3), and she has produced a lot of good video content since then. The reports indicate that there were nearly 8.5 thousand attendees across the 24 sessions, averaging over 350 at each one. I’m looking forward to seeing how many different people that was, although I do know that Wil Sisney managed to attend every single one (if you did too, please let me know). Wil even moderated one of the sessions, which made his feat even greater. Thanks Wil. I also want to send massive thanks to Dave Dustin. Dave probably would have attended all of the sessions, if it weren’t for a power outage that forced him to take a break. He was also a moderator, and it was during this session that he earned special praise. Part way into the session he was moderating, the speaker lost connectivity and couldn’t get back for about fifteen minutes. That’s an incredibly long time when you’re in a live presentation. There were over 200 people tuned in at the time, and I’m sure Dave was as stressed as I was to have a speaker disappear. I started chasing down a phone number for the speaker, while Dave spoke to the audience. And he did brilliantly. He started answering questions, and kept doing that until the speaker came back. Bear in mind that Dave hadn’t expected to give a presentation on that topic (or any other), and was simply drawing on his SQL expertise to get him through. Also consider that this was between midnight and 1am in Dave’s part of the world (Auckland, NZ). I would’ve been expecting just to welcome people, monitor questions, probably read some out, and in general, help make things run smoothly. He went far beyond the call of duty, and if I had a medal to give him, he’d definitely be getting one. On the whole, I think this 24HOP was a success. We tried a different platform, and I think for the most part it was a popular move.
We didn’t ask the question “Was this better than LiveMeeting?”, but we did get a number of people telling us that they thought the platform was very good. Some people have told me I get a chance to put my feet up now that this is over. As I’m also co-ordinating a tour of SQLSaturday events across the Australia/New Zealand region, I don’t quite get to take that much of a break (plus, there’s the little thing of squeezing in seven SQL 2012 exams over the next 2.5 weeks). But I am pleased to be reflecting on this event rather than anticipating it. There were a number of factors that could have gone badly, but on the whole I’m pleased about how it went. A massive thanks to everyone involved. If you’re reading this and thinking you wish you could’ve tuned in more, don’t worry – they were all recorded and you’ll be able to watch them on demand very soon. But as well as that, PASS has a stream of content produced by the Virtual Chapters, so you can keep learning from the comfort of your desk all year round. More info on them at sqlpass.org, of course.

    Read the article

  • How to make a sum of totals for each id

    - by JetJack
    Using Crystal Reports 7, I want to view table1 alongside a sum from table2.

        table1
        id    name
        001   raja
        002   vijay
        003   suresh
        ...

        table2
        id    value
        001   100
        001   200
        001   150
        002   200
        003   150
        003   200
        ...

    I want to display all the rows from table1 and sum(value) from table2. How do I do this in Crystal Reports? Expected output:

        id    name     value
        001   raja     450
        002   vijay    200
        003   suresh   350
        ...

    Note: I add the table fields directly to the report; I have not added a stored procedure, view, or query to the report. How can this be done? I need Crystal Reports help.
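    For reference, if the report could be fed from a SQL command, the shape of the data it needs is a join with a grouped sum, roughly:

        -- All rows from table1, each with the summed value from table2.
        SELECT t1.id, t1.name, SUM(t2.value) AS value
        FROM table1 t1
        LEFT JOIN table2 t2 ON t2.id = t1.id
        GROUP BY t1.id, t1.name;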

    Read the article

  • LINQ: How to Use RemoveAll without using For loop with Array

    - by CrimsonX
    I currently have a log object I'd like to remove entries from, based on a LINQ query. I would like to remove all records in the log where the sum of the versions within a program is greater than 60. I'm fairly confident that this will work, but it seems kludgy:

        for (int index = 0; index < 4; index++)
        {
            Log.RemoveAll(log => (log.Program[index].Version[0].Value +
                                  log.Program[index].Version[1].Value +
                                  log.Program[index].Version[2].Value) > 60);
        }

    Program is an array of 4 values, and Version holds an array of 3 values. Is there a simpler way to do this RemoveAll in LINQ without using the for loop? Thanks for any help in advance!

    Read the article

  • How to calculate real-time stats?

    - by Diego Jancic
    I have a site with millions of users (well, actually it doesn't have any yet, but let's imagine), and I want to calculate stats like "log-ins in the past hour". The problem is similar to the one described here: http://highscalability.com/blog/2008/4/19/how-to-build-a-real-time-analytics-system.html. The simplest approach would be a select like this:

        select count(distinct user_id)
        from logs
        where date >= '20120601 1200' and date <= '20120601 1300'

    (of course other conditions could apply to the stats, like log-ins per country). This would be really slow, mainly if there are millions (or even thousands) of rows, and I want to run this query every time a page is displayed. How would you summarize the data? What should go into the (mem)cache? EDIT: I'm looking for a way to de-normalize the data, or to keep the cache up to date. For example, I could increment an in-memory variable every time someone logs in, but that would only tell me the total number of log-ins, not the "log-ins in the last hour". I hope it's clearer now.
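    One common de-normalization, sketched below with hypothetical names, is a rollup table with one counter row per time bucket (and per country, or any other dimension): each log-in increments a counter, and the per-hour question becomes an indexed read instead of a scan. Note that this counts log-in events; counting distinct users per hour would need a (bucket, user_id) table or an approximation instead.

        -- Hypothetical rollup table: one row per hour bucket and country.
        CREATE TABLE login_stats (
            bucket  DATETIME NOT NULL,  -- timestamp truncated to the hour
            country CHAR(2)  NOT NULL,
            logins  INT      NOT NULL DEFAULT 0,
            PRIMARY KEY (bucket, country)
        );

        -- On each log-in (MySQL-style upsert shown as one possibility):
        INSERT INTO login_stats (bucket, country, logins)
        VALUES ('2012-06-01 12:00:00', 'US', 1)
        ON DUPLICATE KEY UPDATE logins = logins + 1;

        -- "Log-ins in the past hour" is then a cheap lookup:
        SELECT SUM(logins) FROM login_stats
        WHERE bucket >= '2012-06-01 12:00:00';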

    Read the article

  • Exchange 2003 Public Folder Replica list

    - by Niall
    Hi, I am trying to update the replica list on an Exchange 2003 public folder. I am using the Exchange_PublicFolder WMI class to try to add an Exchange server (using the server's DN) with the AddReplica method. Every time I run this I get an invalid-parameter exception. Below is the code I am using:

        WMI.Connect(Server, credentials)
        Using WMISearcher As New ManagementObjectSearcher(WMI.Scope, _
                New ObjectQuery(String.Format("SELECT * FROM Exchange_Publicfolder WHERE path='{0}'", Name)))
            Using PublicFolder As ManagementObjectCollection = WMISearcher.Get
                For Each Folder As ManagementObject In PublicFolder
                    Dim BaseFolder As ManagementBaseObject = Folder.GetMethodParameters("AddReplica")
                    BaseFolder("path") = ServerDN
                    Folder.InvokeMethod("AddReplica", BaseFolder, Nothing)
                Next
            End Using
        End Using

    I have used WMI before, and I can see that the call is connecting to the correct public folder, because I can iterate through the properties once the query has executed. I am not sure what I am doing wrong here. If anyone has any ideas or comments, please let me know. Thanks, Niall

    Read the article

  • How do I improve the efficiency of the queries executed by this generic Linq-to-SQL data access class?

    - by Lee D
    Hi all, I have a class which provides generic access to LINQ to SQL entities, for example:

        class LinqProvider<T> // where T is a L2S entity class
        {
            DataContext context;

            public virtual IEnumerable<T> GetAll()
            {
                return context.GetTable<T>();
            }

            public virtual T Single(Func<T, bool> condition)
            {
                return context.GetTable<T>().SingleOrDefault(condition);
            }
        }

    From the front end, both of these methods appear to work as you would expect. However, when I run a trace in SQL Profiler, the Single method is executing what amounts to a SELECT * FROM [Table], and then returning the single entity that meets the given condition. Obviously this is inefficient, and it is caused by GetTable() returning all rows. My question is: how do I get the query executed by Single() to take the form SELECT * FROM [Table] WHERE [condition], rather than selecting all rows and then filtering out all but one? Is it possible in this context? Any help appreciated, Lee

    Read the article
