Search Results

Search found 33585 results on 1344 pages for 'sql execution plan'.


  • Using Coalesce

    - by Derek Dieter
    The COALESCE function is used to find the first non-null value. The function takes an unlimited number of parameters and evaluates them in order, returning the first non-null one. If all of the parameters are null, COALESCE also returns NULL. -- hard coded example SELECT MyValue = COALESCE(NULL, NULL, 'abc', 123) The example above returns [...]
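
    A minimal, made-up illustration of the typical pattern the excerpt describes (the table and column names here are invented, not from the article): fall back through a list of columns until a non-null value is found.

        -- fall back through the columns until a non-null value is found
        SELECT  CustomerID,
                COALESCE(MobilePhone, HomePhone, 'no phone on file') AS ContactPhone
        FROM    dbo.Customers;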

    Read the article

  • MATLAB arbitrary code execution

    - by aristotle2600
    I am writing an automatic grader program under Linux. There are several graders written in MATLAB, so I want to tie them all together and let students run a program to do an assignment, and have them choose the assignment. I am using a C++ main program, which then has mcc-compiled MATLAB libraries linked to it. Specifically, my program reads a config file for the names of the various MATLAB programs, and other information. It then uses that information to present choices to the student. So if an assignment is changed, added, or removed, all you have to do is change the config file. The idea is that next, the program invokes the correct MATLAB library that has been compiled with mcc. But that means that the libraries have to be recompiled if a grader gets changed. Worse, the whole program must be recompiled if a grader is added or removed. So, I would like one simple, unchanging MATLAB library function to call the grader m-files directly. I currently have such a library that uses eval on a string passed to it from the main program. The problem is that when I do this, apparently, mcc absorbs the grader m-code into itself; changing the grader m-code after compilation has no effect. I would like for this not to happen. It was brought to my attention that MathWorks may not want me to be able to do this, since it could bypass MATLAB entirely. That is not my intention, and I would be happy with a solution that requires a full MATLAB install. My possible solutions are to use a MEX file for the main program, or have the main program call an mcc library, which then calls a MEX file, which then calls the proper grader. The reason I am hesitant about the first solution is that I'm not sure how many changes I would have to make to my code to make it work; my code is C++, not C, which I think makes things more complicated. The second solution, though, may just be more complicated and ultimately have the same problem. So, any thoughts on this situation? How should I do this?

    Read the article

  • How can I rewrite this query for faster execution?

    - by sam
    SELECT s1.ID
    FROM binventory_ostemp s1
    JOIN (
        SELECT Cust_FkId, ProcessID, MAX(Service_Duration) AS duration
        FROM binventory_ostemp
        WHERE ProcessID = '4d2d6068678bc'
          AND Overall_Rank IN (
              SELECT MIN(Overall_Rank)
              FROM binventory_ostemp
              WHERE ProcessID = '4d2d6068678bc'
              GROUP BY Cust_FkId
          )
        GROUP BY Cust_FkId
    ) AS s2
      ON s1.Cust_FkId = s2.Cust_FkId
     AND s1.ProcessID = s2.ProcessID
     AND s1.Service_Duration = s2.duration
     AND s1.ProcessID = '4d2d6068678bc'
    GROUP BY s1.Cust_FkId

    It just goes away (never returns) if there are more than 10K rows in that table. What it does is find, for each customer, the rows with the minimum overall rank and, among those, the maximum service duration for a given ProcessID.

    Table data:

        ID  Cust_FkId  Overall_Rank  Service_Duration  ProcessID
        1   23         2             30                4d2d6068678bc
        2   23         1             45                4d2d6068678bc
        3   23         1             60                4d2d6068678bc
        4   56         3             90                4d2d6068678bc
        5   56         2             50                4d2d6068678bc
        6   56         2             85                4d2d6068678bc
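
    One possible direction, sketched rather than tested against the poster's data: on a database that supports window functions (SQL Server 2005+ or MySQL 8+), the same "lowest rank, then longest duration, per customer" rule can be expressed in a single pass over the table, avoiding the nested GROUP BY subqueries entirely.

        SELECT ID
        FROM (
            SELECT ID,
                   ROW_NUMBER() OVER (
                       PARTITION BY Cust_FkId
                       ORDER BY Overall_Rank ASC, Service_Duration DESC
                   ) AS rn
            FROM binventory_ostemp
            WHERE ProcessID = '4d2d6068678bc'
        ) AS ranked
        WHERE rn = 1;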

    Read the article

  • More Tables or More Databases?

    - by BuckWoody
    I got an e-mail from someone that has an interesting situation. He has 15,000 customers, and he asks if he should have a database per customer for their data. Without a LOT more data it’s impossible to say, of course, but there are some general concepts to keep in mind. Whenever you’re segmenting data, it’s all about boundary choices. You have not only boundaries around how big the data will get, but things like how many objects (tables, stored procedures and so on) will be involved, whether there are any cross-sections of data (do they share location or product information) and – very important – what are the security requirements? From the answers to these types of questions, you now have the choice of making multiple tables in a single database, or using multiple databases. A database carries some overhead – it needs a certain amount of memory for locking and so on. But it has a very clean boundary – everything from objects to security can be kept apart. Having multiple users in the same database is possible as well, using things like a Schema. But keeping 15,000 schemas can be challenging as well. My recommendation in complex situations like this is similar to a post on decisions that I did earlier – I lay out the choices on a spreadsheet in rows, and then my requirements at the top in the columns. I give each choice a number based on how well it meets each requirement. At the end, the highest number wins. And many times it’s a mix – perhaps this person could segment customers into larger regions or districts or products, in a database. Within that database might be multiple schemas for the customers. Of course, if he needs to query across all customers, that becomes another requirement.

    Read the article

  • Element not found blocks execution in Selenium

    - by Mariano
    In my test, I try to verify that certain text exists (after an action) using find_element_by_xpath. If I use the right expression and my test passes, the routine ends correctly in no time. However, if I try text that is wrong (meaning that the test will fail), it hangs forever and I have to kill the script, otherwise it does not end. Here is my test (the expression "Thx user, client or password you entered is incorrect" does not exist in the system, no matter what the user does):

        # -*- coding: utf-8 -*-
        import gettext
        import unittest
        from selenium import webdriver

        class TestWrongLogin(unittest.TestCase):
            def setUp(self):
                self.driver = webdriver.Firefox()
                self.driver.get("http://10.23.1.104:8888/")
                # let's check the language
                try:
                    self.lang = self.driver.execute_script("return navigator.language;")
                    self.lang = self.lang("-")[0]
                except:
                    self.lang = "en"
                language = gettext.translation('app', '/app/locale', [self.lang], fallback=True)
                language.install()
                self._ = gettext.gettext

            def tearDown(self):
                self.driver.quit()

            def test_wrong_client(self):
                # test wrong client
                inputElement = self.driver.find_element_by_name("login")
                inputElement.send_keys("root")
                inputElement = self.driver.find_element_by_name("client")
                inputElement.send_keys("Unleash")
                inputElement = self.driver.find_element_by_name("password")
                inputElement.send_keys("qwerty")
                self.driver.find_element_by_name("form.submitted").click()
                # wait for the db answer
                self.driver.implicitly_wait(10)
                ret = self.driver.find_element_by_xpath(
                    "//*[contains(.,'{0}')]".\
                    format(self._(u"Thx user, client or password you entered is incorrect")))
                self.assertTrue(isinstance(ret, webdriver.remote.webelement.WebElement))

        if __name__ == '__main__':
            unittest.main()

    Why does it do that, and how can I prevent it?
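
    A sketch of one way to avoid the hang (the helper name is invented; this is not the poster's code): implicitly_wait applies to every element lookup, so a missing element blocks for the full wait on each call, while an explicit WebDriverWait with a bounded timeout turns "the text never appeared" into a clean TimeoutException that the test can assert on.

        from selenium.webdriver.common.by import By
        from selenium.webdriver.support.ui import WebDriverWait
        from selenium.webdriver.support import expected_conditions as EC
        from selenium.common.exceptions import TimeoutException

        def text_present(driver, text, timeout=10):
            """Return True if an element containing `text` appears within `timeout` seconds."""
            try:
                WebDriverWait(driver, timeout).until(
                    EC.presence_of_element_located(
                        (By.XPATH, "//*[contains(., '{0}')]".format(text))))
                return True
            except TimeoutException:
                return False

    In the test, this would replace the find_element_by_xpath call, e.g. self.assertTrue(text_present(self.driver, self._(u"Thx user, client or password you entered is incorrect"))).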

    Read the article

  • SQL Server 2012 Integration Services - Using PowerShell to Configure Project Environments

    Continuing our discussion on how to leverage the capabilities of PowerShell to automate the most basic SSIS management tasks, this article will explore more complex topics by demonstrating the use of PowerShell in implementing and utilizing project environments.

    Read the article

  • SQL Server Integration Services Connection Manager Tips and Tricks

    In this article, we will take a look at the following Tips and Tricks for Connection Managers: Adding an "Application Name" property to the connection string; Creating Two Connection Managers for each Database Connection; and Capturing Connection Manager details in Package Configurations.

    Read the article

  • Between-request Garbage Collection using Passenger

    - by raphaelcm
    We're using Rails 3.0.7 and REE 1.8.7. Long-term, we will be upgrading, but at the moment it's not feasible. Following the advice of several blog posts, we've been tuning our GC, and have settings that work pretty well. But we would really like to run GC outside of the request-response cycle. I've tried patching Passenger per this post, and using the code supplied in this SO question. In both cases, GC does indeed happen between requests. However, every time the between-request GC happens, I see a bunch of this: MONGODB [INFO] Connecting... MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1) MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1) MONGODB admin['$cmd'].find({:ismaster=>1}).limit(-1) Starting the New Relic Agent. Installed New Relic Browser Monitoring middleware SQL (0.0ms) SHOW TABLES SQL (0.0ms) SHOW TABLES RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` = 'pages' AND `refinery_settings`.`name` = 'use_marketable_urls' LIMIT 1 SQL (0.0ms) BEGIN RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 1 LIMIT 1 AREL (0.0ms) UPDATE `refinery_settings` SET `value` = '--- \"false\"\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 1 SQL (0.0ms) SHOW TABLES RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` SQL (0.0ms) COMMIT SQL (0.0ms) SHOW TABLES RefinerySetting Load (4.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`scoping` IS NULL AND `refinery_settings`.`name` = 'user_image_sizes' LIMIT 1 SQL (0.0ms) BEGIN RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` WHERE `refinery_settings`.`id` = 17 LIMIT 1 AREL (0.0ms) UPDATE `refinery_settings` SET `value` = '--- \n:small: 120x120>\n:medium: 280x280>\n:large: 580x580>\n', `callback_proc_as_string` = NULL WHERE `refinery_settings`.`id` = 17 SQL (0.0ms) SHOW TABLES RefinerySetting Load (0.0ms) SELECT `refinery_settings`.* FROM `refinery_settings` SQL (0.0ms) COMMIT ******** Engine Extend: app/helpers/blog_posts_helper SQL (0.0ms) SHOW TABLES SQL (0.0ms) SHOW TABLES SQL (0.0ms) SHOW TABLES SQL (4.0ms) SHOW TABLES SQL (0.0ms) SHOW TABLES SQL (0.0ms) SHOW TABLES SQL (0.0ms) SHOW TABLES ******** Engine Extend: app/models/user SQL (0.0ms) describe `roles_users` SQL (0.0ms) SHOW TABLES SQL (0.0ms) SHOW TABLES SQL (4.0ms) describe `roles_users` SQL (0.0ms) SHOW TABLES SQL (4.0ms) SHOW TABLES SQL (0.0ms) SHOW TABLES SQL (0.0ms) SHOW TABLES (etc, etc, etc) Which is what happens when rails "loads the world" when the app starts up. Basically, GC.start is re-loading the app for some reason. Because of this, between-request GC is much slower than inline GC. Is there a way around this? I would love to have snappy, between-request GC if possible. Thanks.

    Read the article

  • Loading city/state from SQL Server to Google Maps?

    - by knawlejj
    I'm trying to make a small application that takes a city & state and geocodes that address to a lat/long location. Right now I am utilizing Google Map's API, ColdFusion, and SQL Server. Basically the city and state fields are in a database table and I want to take those locations and get marker put on a Google Map showing where they are. This is my code to do the geocoding, and viewing the source of the page shows that it is correctly looping through my query and placing a location ("Omaha, NE") in the address field, but no marker, or map for that matter, is showing up on the page: function codeAddress() { <cfloop query="GetLocations"> var address = document.getElementById(<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>).value; if (geocoder) { geocoder.geocode( {<cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput>: address}, function(results, status) { if (status == google.maps.GeocoderStatus.OK) { var marker = new google.maps.Marker({ map: map, position: results[0].geometry.location, title: <cfoutput>#Trim(hometown)#,#Trim(state)#</cfoutput> }); } else { alert("Geocode was not successful for the following reason: " + status); } }); } </cfloop> } And here is the code to initialize the map: var geocoder; var map; function initialize() { geocoder = new google.maps.Geocoder(); var latlng = new google.maps.LatLng(42.4167,-90.4290); var myOptions = { zoom: 5, center: latlng, mapTypeId: google.maps.MapTypeId.ROADMAP } var marker = new google.maps.Marker({ position: latlng, map: map, title: "Test" }); map = new google.maps.Map(document.getElementById("map_canvas"), myOptions); } I do have a map working that uses lat/long that was hard coded into the database table, but I want to be able to just use the city/state and convert that to a lat/long. Any suggestions or direction? Storing the lat/long in the database is also possible, but I don't know how to do that within SQL.

    Read the article

  • Drupal hook_cron execution order

    - by LanguaFlash
    Does anyone know offhand in what order Drupal executes its _cron hooks? It is important for a certain custom module I am developing, and I can't seem to find any documentation on it on the web. Maybe I'm searching for the wrong thing! Any help? Jeff

    Read the article

  • How to handle recurring execution?

    - by ShaneC
    I am trying to validate the solution I came up with for what I think is a fairly typical problem. I have a service running, and every 10 minutes it should do something. I've ended up with the following:

        private bool isRunning = true;

        public void Execute()
        {
            while (isRunning)
            {
                if (isRunning)
                {
                    DoSomething();
                    m_AutoResetEvent.WaitOne(new TimeSpan(0, 10, 0));
                }
            }
        }

        public void Stop()
        {
            isRunning = false;
            m_AutoResetEvent.Set();
        }

    The immediate potential problem I can see is that I'm not doing any sort of locking around the isRunning modification in Stop(), which gets called by another thread, but I'm not sure I really need to. The worst that I think could happen is that it runs one extra cycle. Beyond that, are there any obvious problems with this code? Is there a better way to solve this problem that I'm unaware of?
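
    Not necessarily better, just a sketch of one common alternative (the class name and interval are illustrative): CancellationTokenSource bundles the "stop" flag and the wake-up signal into a single object, so there is no shared bool to reason about at all.

        using System;
        using System.Threading;

        public class RecurringWorker
        {
            private readonly CancellationTokenSource _cts = new CancellationTokenSource();
            private static readonly TimeSpan Interval = TimeSpan.FromMinutes(10);

            public void Execute()
            {
                while (!_cts.IsCancellationRequested)
                {
                    DoSomething();
                    // Returns early if Stop() cancels the token; otherwise times out after 10 minutes.
                    _cts.Token.WaitHandle.WaitOne(Interval);
                }
            }

            public void Stop()
            {
                _cts.Cancel();   // wakes the wait and ends the loop
            }

            private void DoSomething() { /* work */ }
        }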

    Read the article

  • In SQL Server what is most efficient way to compare records to other records for duplicates with in

    - by Glenn
    We have an SQL Server that gets daily imports of data files from clients. This data is interrelated and we are always scrubbing it and having to look for suspect duplicate records between these files. Finding and tagging suspect records can get pretty complicated. We use logic that requires some field values to be the same, allows some field values to differ, and allows a range to be specified for how different certain field values can be. The only way we've found to do it is by using a cursor based process, and it places a heavy burden on the database. So I wanted to ask if there's a more efficient way to do this. I've heard it said that there's almost always a more efficient way to replace cursors with clever JOINS. But I have to admit I'm having a lot of trouble with this one. For a concrete example suppose we have 1 table, an "orders" table, with the following 6 fields. order_id, customer_id product_id, quantity, sale_date, price We want to look through the records to find suspect duplicates on the following example criteria. These get increasingly harder. 1. Records that have the same product_id, sale_date, and quantity but different customer_id's should be marked as suspect duplicates for review. 2. Records that have the same customer_id, product_id, quantity and have sale_dates within five days of each other should be marked as suspect duplicates for review 3. Records that have the same customer_id, product_id, but different quantities within 20 units, and sales dates within five days of each other should be considered suspect. Is it possible to satisfy each one of these criteria with a single SQL Query that uses JOINS? Is this the most efficient way to do this?
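
    A sketch of the kind of set-based check the question is reaching for, limited to the first criterion (column names are taken from the example; the date and quantity tolerances in criteria 2 and 3 can be layered onto the same join with DATEDIFF and ABS predicates):

        -- criterion 1: same product, sale date, and quantity, but different customers
        SELECT  o1.order_id, o2.order_id AS suspect_duplicate_of
        FROM    orders AS o1
        JOIN    orders AS o2
                ON  o2.product_id  = o1.product_id
                AND o2.sale_date   = o1.sale_date
                AND o2.quantity    = o1.quantity
                AND o2.customer_id <> o1.customer_id
                AND o2.order_id    > o1.order_id;   -- report each pair only once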

    Read the article

  • What strategy should be employed to access Facebook data offline?

    - by user686021
    I'm working on a project similar to Klout, which provides detail about how you influence other people and who influenced you. We'll be fetching data from a few social networking sites (e.g. LinkedIn, Facebook, Twitter) to analyze how users interact with one another. For that we need to parse the data, store it in a db, and analyze it so that the strength of the relationship between two users can be determined. We'll be accessing the data offline as well to provide users with accurate results. If we consider Facebook activities, we need access to Facebook users' news feed and wall data, which includes likes, comments, shares, etc. To decide how one user influences another, we'll store all the data and analyze it. I need suggestions on what steps need to be taken for good performance. We'll be using ASP.NET (C#) Web Forms, SQL Server, and jQuery. The main concern is the parsing of data and its storage and retrieval with the least overhead. For that I've summarized a few points below: Should we switch over to a document-oriented database, like MongoDB or RavenDB, for the whole app or part of it, even though none of the team members have experience with them? Should we use SQL Server Analysis Services? Is there any other library than Json.NET for parsing data? Is it advisable to use any C# library over FQL + GET requests? I've tried to provide as much info as possible. Please share your views.

    Read the article

  • How to get a debug flow of execution in C++

    - by Rich
    Hi, I work on a global trading system which supports many users. Each user can book, amend, edit, and delete trades. The system is regulated by a central deal capture service, which informs all the users of any updates that occur. The problem comes when we have crashes: since the production environment is impossible to re-create on a test system, I have to rely on crash dumps and log files. However, these don't tell me what the user has been doing. I'd like a system that would (at the time of crashing) dump out a history of what the user has been doing. Anything that I add has to go into the live environment, so it can't impact performance too much. Ideas-wise, I was thinking of a MACRO at the top of each function which acted like a stack trace (only I could supply additional user information, like trade ids, user dialog choices, etc.). The system would record stack traces (on a per-thread basis) and keep a history in a cyclic buffer (varying in size, depending on how much history you wanted to capture). Then on a crash, I could dump out this history stack. I'd really like to hear if anyone has a better solution, or if anyone knows of an existing framework. Thanks, Rich
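
    A minimal sketch of the ring-buffer idea described above (all names are invented, a C++11 compiler is assumed for thread_local, and wiring dump() into the real crash handler, adding timestamps, and sizing the buffer are left out):

        #include <array>
        #include <cstddef>
        #include <cstdio>
        #include <string>

        struct CallHistory {
            static constexpr std::size_t Capacity = 256;   // how much history to keep
            std::array<std::string, Capacity> entries;
            std::size_t next = 0;

            void record(const std::string& msg) {
                entries[next % Capacity] = msg;            // overwrite the oldest entry
                ++next;
            }

            void dump() const {                            // call from the crash handler
                std::size_t count = next < Capacity ? next : Capacity;
                for (std::size_t i = 0; i < count; ++i)
                    std::fprintf(stderr, "%s\n", entries[(next - count + i) % Capacity].c_str());
            }
        };

        thread_local CallHistory g_history;                // one history per thread

        #define TRACE_CALL(extra) g_history.record(std::string(__func__) + ": " + (extra))

        void BookTrade(int tradeId) {                      // hypothetical call site
            TRACE_CALL("trade id " + std::to_string(tradeId));
            // ... existing booking logic ...
        }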

    Read the article

  • C : Memory layout of C program execution

    - by pavun_cool
    Hi all, I wanted to know how the kernel provides memory for a simple C program. For example:

        #include <stdio.h>
        #include <malloc.h>

        int my_global = 10;

        main()
        {
            char *str;
            static int val;

            str = (char *) malloc(100);
            scanf("%s", str);
            printf(" val:%s\n", str);
        }

    In this program I have used static, global, and malloc-allocated memory. So how will the memory be laid out? Can anyone give me a URL with detailed information about this process?
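
    A hedged sketch (not from the original post) that makes the layout visible by printing addresses; on a typical Linux build the global and static land in the data/BSS segments, the malloc'd block on the heap, and the locals on the stack, though exact addresses and ordering are platform-dependent:

        #include <stdio.h>
        #include <stdlib.h>

        int my_global = 10;              /* initialized global -> data segment (.data)  */

        int main(void)
        {
            static int val;              /* zero-initialized static -> .bss             */
            int local = 0;               /* main's stack frame                          */
            char *str = malloc(100);     /* the pointer is on the stack;                */
                                         /* the 100 bytes it points to are on the heap  */

            printf("global: %p\n", (void *)&my_global);
            printf("static: %p\n", (void *)&val);
            printf("heap:   %p\n", (void *)str);
            printf("stack:  %p\n", (void *)&local);

            free(str);
            return 0;
        }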

    Read the article

  • Sql Server 2005 Database Tables - Row Comparison Column By Column.

    - by Goober
    Scenario: I have TWO database tables of exactly the SAME STRUCTURE. The difference between these tables is that one contains data populated by one application and the other is populated by a different application. Each application is trying to produce the same result, but using two different methods of implementation. Proposed idea: What I want to do is run both applications, which will each produce roughly 35,000 rows containing 10 columns - so all in all, 70,000 rows of data. I then want to compare each row of data, COLUMN BY COLUMN, to check whether the values are the same or not. Current thoughts: Since there is so much data to compare, I feel that the best way to do this would be to write an application, preferably in C# (but if necessary, T-SQL), to compare each row of data column by column and write out any failed comparisons to a text log file. Question: Could anybody suggest an efficient way to perform column-by-column row comparison for 70,000 rows worth of data? I'm struggling for ideas on how to tackle this problem. Extra detail: The two applications are both written in C# .NET 3.5. The database is running on SQL Server 2005. Help greatly appreciated.
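
    One set-based option worth considering, sketched with invented table and column names and assuming both result sets are reachable from the same database: EXCEPT (available in SQL Server 2005) returns the rows from the first SELECT that have no exact column-by-column match in the second, and it treats NULLs as equal for this comparison.

        SELECT col1, col2, col3 /* ... all ten columns ... */ FROM dbo.ResultsFromAppA
        EXCEPT
        SELECT col1, col2, col3 /* ... all ten columns ... */ FROM dbo.ResultsFromAppB;

        -- Run it again with the tables swapped to find rows that exist only in B.
        -- To pinpoint which column differs for a matching key, join on the key instead:
        SELECT a.KeyColumn
        FROM   dbo.ResultsFromAppA AS a
        JOIN   dbo.ResultsFromAppB AS b ON b.KeyColumn = a.KeyColumn
        WHERE  a.col1 <> b.col1;   -- repeat (or OR together) per column, minding NULLs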

    Read the article

  • Shared Datasets in SQL Server 2008 R2

    This article leverages the examples and concepts explained in Parts I through IV of the spatial data series, which develops a "BI-Satellite" app. Overview: In the spatial data series we ... [Read Full Article]

    Read the article

  • What is the fastest way to get a DataTable into SQL Server?

    - by John Gietzen
    I have a DataTable in memory that I need to dump straight into a SQL Server temp table. After the data has been inserted, I transform it a little bit, and then insert a subset of those records into a permanent table. The most time consuming part of this operation is getting the data into the temp table. Now, I have to use temp tables, because more than one copy of this app is running at once, and I need a layer of isolation until the actual insert into the permanent table happens. What is the fastest way to do a bulk insert from a C# DataTable into a SQL Temp Table? I can't use any 3rd party tools for this, since I am transforming the data in memory. My current method is to create a parameterized SqlCommand: INSERT INTO #table (col1, col2, ... col200) VALUES (@col1, @col2, ... @col200) and then for each row, clear and set the parameters and execute. There has to be a more efficient way. I'm able to read and write the records on disk in a matter of seconds...
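
    One approach worth measuring (a sketch, not drop-in code: the connection string, column list, and batch size are placeholders): SqlBulkCopy streams the whole DataTable to the server in one operation, and it can target a #temp table as long as the table is created and loaded on the same connection.

        using System.Data;
        using System.Data.SqlClient;

        static void LoadToTempTable(DataTable dataTable, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                // The temp table only lives as long as this connection does.
                using (var create = new SqlCommand(
                    "CREATE TABLE #table (col1 INT, col2 NVARCHAR(50) /* ... col200 */)", connection))
                {
                    create.ExecuteNonQuery();
                }

                using (var bulkCopy = new SqlBulkCopy(connection))
                {
                    bulkCopy.DestinationTableName = "#table";
                    bulkCopy.BatchSize = 5000;            // tune against real data
                    bulkCopy.WriteToServer(dataTable);    // one streamed load instead of one INSERT per row
                }

                // ... transform and INSERT INTO the permanent table on this same connection ...
            }
        }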

    Read the article

  • For-Loop and LINQ's deferred execution don't play well together

    - by Tim Schmelter
    The title suggests that I already have an idea of what's going on, but I cannot explain it. I've tried to order a List<string[]> dynamically by each "column", beginning with the first and ending with the minimum Length of all the arrays. So in this sample it is 2, because the last string[] has only two elements:

        List<string[]> someValues = new List<string[]>();
        someValues.Add(new[] { "c", "3", "b" });
        someValues.Add(new[] { "a", "1", "d" });
        someValues.Add(new[] { "d", "4", "a" });
        someValues.Add(new[] { "b", "2" });

    Now I've tried to order them all by the first and second column. I could do it statically in this way:

        someValues = someValues
            .OrderBy(t => t[0])
            .ThenBy(t => t[1])
            .ToList();

    But if I don't know the number of "columns", I could use this loop (that's what I thought):

        int minDim = someValues.Min(t => t.GetLength(0)); // 2
        IOrderedEnumerable<string[]> orderedValues = someValues.OrderBy(t => t[0]);
        for (int i = 1; i < minDim; i++)
        {
            orderedValues = orderedValues.ThenBy(t => t[i]);
        }
        someValues = orderedValues.ToList(); // IndexOutOfRangeException

    But that doesn't work; it fails with an IndexOutOfRangeException at the last line. The debugger tells me that i is 2 at that time, so the for-loop condition seems to be ignored; i is already == minDim. Why is that so?
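
    For anyone hitting the same thing: the lambdas close over the loop variable i itself, and because ThenBy is deferred they all execute only when ToList() runs, by which point i has already reached minDim. A sketch of the usual fix is to copy the variable inside the loop so each lambda captures its own value:

        for (int i = 1; i < minDim; i++)
        {
            int column = i;   // each iteration gets its own captured copy
            orderedValues = orderedValues.ThenBy(t => t[column]);
        }
        someValues = orderedValues.ToList();   // now sorts by columns 0 .. minDim-1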

    Read the article

  • How do you bind SQL Data to a .NET DataGridView?

    - by Jordan S
    I am trying to bind a table in a SQL database to a DataGridView control. I would like to make it so that when the user enters a new line of data in the DataGridView, a record is automatically added to the database. Is there a way to do this using LINQ to SQL? I have tried the code below, but after I add a new entry I don't think the data gets added to the DB. Please help!

        BOMClassesDataContext DB = new BOMClassesDataContext();
        var mfrs = from m in DB.Manufacturers
                   select m;
        BindingSource bs = new BindingSource();
        bs.DataSource = mfrs;
        dataGridView1.DataSource = bs;

    I tried adding DB.SubmitChanges() to the CellValueChanged event handler and that partially works. If I click the bottom empty row, it automatically fills in the ID (identity) column of the table with a "0" instead of the next unused value. If I change that value manually to the next available one, then it adds the new record fine, but if I leave it at 0 it does nothing. How can I fix this?

    Read the article

  • Avoid External Dependencies in SQL Server Triggers

    I sometimes want to perform auditing or other actions in a trigger based on some criteria. More specifically, there are a few cases that may warrant an e-mail; for example, if a web sale takes place that requires custom or overnight shipping and handling. It is tempting to just add code to the trigger that sends an e-mail when these criteria are met. But this can be problematic for two reasons: (1) your users are waiting for that processing to occur, and (2) if you can't send the e-mail, how do you decide whether or not to roll back the transaction, and how do you bring the problem to the attention of the administrator?
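
    A sketch of the decoupling the article argues for, with invented table, column, and criteria names: the trigger only records the fact, and a scheduled job (SQL Agent, for example) later reads the queue, calls sp_send_dbmail, and marks the row processed, so a mail failure never slows down or rolls back the sale.

        CREATE TABLE dbo.ShippingAlertQueue
        (
            OrderID    INT      NOT NULL,
            QueuedAt   DATETIME NOT NULL DEFAULT GETDATE(),
            Processed  BIT      NOT NULL DEFAULT 0
        );
        GO

        CREATE TRIGGER dbo.trg_WebSales_ShippingAlert
        ON dbo.WebSales
        AFTER INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            INSERT INTO dbo.ShippingAlertQueue (OrderID)
            SELECT i.OrderID
            FROM   inserted AS i
            WHERE  i.ShippingMethod IN ('Custom', 'Overnight');   -- illustrative criteria
        END;
        GO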

    Read the article

  • LinkedList.contains execution speed

    - by Le_Coeur
    Why does the method LinkedList.contains() run more quickly than this implementation?

        for (String s : list)
            if (s.equals(element))
                return true;
        return false;

    I don't see a great difference between these two implementations (I assume the objects being searched for aren't null); both use the same iteration and the same equals operation.
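
    For what it's worth, in the JDK implementations I've looked at, contains() delegates to indexOf(), which walks the list's internal nodes directly, while the for-each loop allocates an Iterator and pays a hasNext()/next()/modification check per element; the difference is usually small, so measuring is the only reliable answer. A crude, self-contained comparison sketch (no JIT warmup and invented sizes, so treat the numbers with suspicion):

        import java.util.LinkedList;
        import java.util.List;

        public class ContainsComparison {
            static boolean manualContains(List<String> list, String element) {
                for (String s : list)              // iterator-based traversal
                    if (s.equals(element))
                        return true;
                return false;
            }

            public static void main(String[] args) {
                List<String> list = new LinkedList<String>();
                for (int i = 0; i < 1000000; i++)
                    list.add("item" + i);

                String needle = "item999999";      // worst case: last element

                long t1 = System.nanoTime();
                boolean a = list.contains(needle);
                long t2 = System.nanoTime();
                boolean b = manualContains(list, needle);
                long t3 = System.nanoTime();

                System.out.printf("contains(): %b in %d us%n", a, (t2 - t1) / 1000);
                System.out.printf("for-each  : %b in %d us%n", b, (t3 - t2) / 1000);
            }
        }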

    Read the article
