Search Results

Search found 49435 results on 1978 pages for 'query string'.


  • Ubuntu Software Center starts, then crashes before fully loaded [closed]

    - by Nathan Weisser
    Possible Duplicate: Software center not opening I am brand new to Linux and Ubuntu, and I couldn't install GIMP without the software center. I looked up earlier how to fix it, and it said to fix my sources list, and I did, but now i get a new error in the terminal. 2012-08-14 15:29:08,941 - softwarecenter.ui.gtk3.app - INFO - setting up proxy 'None' 2012-08-14 15:29:08,954 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True 2012-08-14 15:29:09,407 - softwarecenter.ui.gtk3.app - INFO - building local database 2012-08-14 15:29:09,408 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open() 2012-08-14 15:29:17,308 - softwarecenter.db.update - WARNING - Problem creating rebuild path '/var/cache/software-center/xapian_rb'. 2012-08-14 15:29:17,309 - softwarecenter.db.update - WARNING - Please check you have the relevant permissions. 2012-08-14 15:29:17,309 - softwarecenter.db.database - INFO - open() database: path=None use_axi=True use_agent=True 2012-08-14 15:29:18,039 - softwarecenter.backend.reviews - WARNING - Could not get usefulness from server, no username in config file 2012-08-14 15:29:18,431 - softwarecenter.ui.gtk3.app - INFO - show_available_packages: search_text is '', app is None. 2012-08-14 15:29:19,153 - softwarecenter.db.pkginfo_impl.aptcache - INFO - aptcache.open() Traceback (most recent call last): File "/usr/bin/software-center", line 176, in <module> app.run(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1422, in run self.show_available_packages(args) File "/usr/share/software-center/softwarecenter/ui/gtk3/app.py", line 1352, in show_available_packages self.view_manager.set_active_view(ViewPages.AVAILABLE) File "/usr/share/software-center/softwarecenter/ui/gtk3/session/viewmanager.py", line 154, in set_active_view view_widget.init_view() File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/availablepane.py", line 136, in init_view SoftwarePane.init_view(self) File "/usr/share/software-center/softwarecenter/ui/gtk3/panes/softwarepane.py", line 215, in init_view self.icons, self.show_ratings) File "/usr/share/software-center/softwarecenter/ui/gtk3/views/appview.py", line 69, in __init__ self.helper = AppPropertiesHelper(db, cache, icons) File "/usr/share/software-center/softwarecenter/ui/gtk3/models/appstore2.py", line 109, in __init__ softwarecenter.paths.APP_INSTALL_PATH) File "/usr/share/software-center/softwarecenter/db/categories.py", line 255, in parse_applications_menu category = self._parse_menu_tag(child) File "/usr/share/software-center/softwarecenter/db/categories.py", line 444, in _parse_menu_tag query = self._parse_include_tag(element) File "/usr/share/software-center/softwarecenter/db/categories.py", line 402, in _parse_include_tag xapian.Query.OP_AND) File "/usr/share/software-center/softwarecenter/db/categories.py", line 341, in _parse_and_or_not_tag operator_elem, xapian.Query(), xapian.Query.OP_OR) File "/usr/share/software-center/softwarecenter/db/categories.py", line 385, in _parse_and_or_not_tag q = self.db.xapian_parser.parse_query(s, File "/usr/share/software-center/softwarecenter/db/database.py", line 174, in xapian_parser xapian_parser = self._get_new_xapian_parser() File "/usr/share/software-center/softwarecenter/db/database.py", line 200, in _get_new_xapian_parser xapian_parser.set_database(self.xapiandb) File "/usr/share/software-center/softwarecenter/db/database.py", line 166, in xapiandb self._db_per_thread[thread_name] = self._get_new_xapiandb() File 
"/usr/share/software-center/softwarecenter/db/database.py", line 179, in _get_new_xapiandb xapiandb = xapian.Database(self._db_pathname) File "/usr/lib/python2.7/dist-packages/xapian/__init__.py", line 3666, in __init__ _xapian.Database_swiginit(self,_xapian.new_Database(*args)) xapian.DatabaseOpeningError: Couldn't detect type of database I'm not sure how to fix the errors, and I couldn't find a topic on them anywhere. Be nice, because I am a two-day old Linux user :/ Tell me if you need my Sources list

    Read the article

  • New DMV… not yet

    - by Michael Zilberstein
    Downloaded and installed new toy: And while reading BOL, stumbled upon new extremely useful DMV: sys.dm_exec_query_profiles . This DMV enables DBA to monitor query progress while it is being executed. Counters in the DMV are per operation per thread. So we’ll be able to monitor in real time which thread (even for parallel processing) processes which node in the plan. Or find heavy operations “post mortem”. We all know the uncomfortable feeling when some heavy query runs and the boss starts asking...(read more)
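
    For readers who want to try the DMV out, the sketch below polls it from a separate .NET monitoring connection while a long-running query executes elsewhere. Treat it as a rough illustration rather than a definitive sample: the connection string is a placeholder and the column names are taken from the documentation as I recall it. Note that on SQL Server 2014 the session being watched generally needs profiling enabled first (for example SET STATISTICS PROFILE ON) before any rows show up.

    using System;
    using System.Data.SqlClient;

    class QueryProgressMonitor
    {
        static void Main()
        {
            // Placeholder connection string; point it at the instance being monitored.
            const string connectionString = "Server=.;Database=master;Integrated Security=true";

            // Column names as documented for sys.dm_exec_query_profiles (from memory).
            const string sql = @"
                SELECT session_id, node_id, physical_operator_name,
                       thread_id, row_count, estimate_row_count
                FROM sys.dm_exec_query_profiles
                ORDER BY session_id, node_id, thread_id;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // One row per operator per thread: actual vs. estimated rows so far.
                        Console.WriteLine("spid {0} node {1} ({2}) thread {3}: {4}/{5} rows",
                            reader["session_id"], reader["node_id"],
                            reader["physical_operator_name"], reader["thread_id"],
                            reader["row_count"], reader["estimate_row_count"]);
                    }
                }
            }
        }
    }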

    Read the article

  • SQL Monitor’s data repository

    - by Chris Lambrou
    As one of the developers of SQL Monitor, I often get requests passed on by our support people from customers who are looking to dip into SQL Monitor’s own data repository, in order to pull out bits of information that they’re interested in. Since there’s clearly interest out there in playing around directly with the data repository, I thought I’d write some blog posts to start to describe how it all works. The hardest part for me is knowing where to begin, since the schema of the data repository is pretty big. Hmmm… I guess it’s tricky for anyone to write anything but the most trivial of queries against the data repository without understanding the hierarchy of monitored objects, so perhaps my first post should start there. I always imagine that whenever a customer fires up SSMS and starts to explore their SQL Monitor data repository database, they become immediately bewildered by the schema – that was certainly my experience when I did so for the first time. The following query shows the number of different object types in the data repository schema: SELECT type_desc, COUNT(*) AS [count] FROM sys.objects GROUP BY type_desc ORDER BY type_desc;  type_desccount 1DEFAULT_CONSTRAINT63 2FOREIGN_KEY_CONSTRAINT181 3INTERNAL_TABLE3 4PRIMARY_KEY_CONSTRAINT190 5SERVICE_QUEUE3 6SQL_INLINE_TABLE_VALUED_FUNCTION381 7SQL_SCALAR_FUNCTION2 8SQL_STORED_PROCEDURE100 9SYSTEM_TABLE41 10UNIQUE_CONSTRAINT54 11USER_TABLE193 12VIEW124 With 193 tables, 124 views, 100 stored procedures and 381 table valued functions, that’s quite a hefty schema, and when you browse through it using SSMS, it can be a bit daunting at first. So, where to begin? Well, let’s narrow things down a bit and only look at the tables belonging to the data schema. That’s where all of the collected monitoring data is stored by SQL Monitor. The following query gives us the names of those tables: SELECT sch.name + '.' + obj.name AS [name] FROM sys.objects obj JOIN sys.schemas sch ON sch.schema_id = obj.schema_id WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' ORDER BY sch.name, obj.name; This query still returns 110 tables. I won’t show them all here, but let’s have a look at the first few of them:  name 1data.Cluster_Keys 2data.Cluster_Machine_ClockSkew_UnstableSamples 3data.Cluster_Machine_Cluster_StableSamples 4data.Cluster_Machine_Keys 5data.Cluster_Machine_LogicalDisk_Capacity_StableSamples 6data.Cluster_Machine_LogicalDisk_Keys 7data.Cluster_Machine_LogicalDisk_Sightings 8data.Cluster_Machine_LogicalDisk_UnstableSamples 9data.Cluster_Machine_LogicalDisk_Volume_StableSamples 10data.Cluster_Machine_Memory_Capacity_StableSamples 11data.Cluster_Machine_Memory_UnstableSamples 12data.Cluster_Machine_Network_Capacity_StableSamples 13data.Cluster_Machine_Network_Keys 14data.Cluster_Machine_Network_Sightings 15data.Cluster_Machine_Network_UnstableSamples 16data.Cluster_Machine_OperatingSystem_StableSamples 17data.Cluster_Machine_Ping_UnstableSamples 18data.Cluster_Machine_Process_Instances 19data.Cluster_Machine_Process_Keys 20data.Cluster_Machine_Process_Owner_Instances 21data.Cluster_Machine_Process_Sightings 22data.Cluster_Machine_Process_UnstableSamples 23… There are two things I want to draw your attention to: The table names describe a hierarchy of the different types of object that are monitored by SQL Monitor (e.g. clusters, machines and disks). For each object type in the hierarchy, there are multiple tables, ending in the suffixes _Keys, _Sightings, _StableSamples and _UnstableSamples. 
Not every object type has a table for every suffix, but the _Keys suffix is especially important and a _Keys table does indeed exist for every object type. In fact, if we limit the query to return only those tables ending in _Keys, we reveal the full object hierarchy: SELECT sch.name + '.' + obj.name AS [name] FROM sys.objects obj JOIN sys.schemas sch ON sch.schema_id = obj.schema_id WHERE obj.type_desc = 'USER_TABLE' AND sch.name = 'data' AND obj.name LIKE '%_Keys' ORDER BY sch.name, obj.name;  name 1data.Cluster_Keys 2data.Cluster_Machine_Keys 3data.Cluster_Machine_LogicalDisk_Keys 4data.Cluster_Machine_Network_Keys 5data.Cluster_Machine_Process_Keys 6data.Cluster_Machine_Services_Keys 7data.Cluster_ResourceGroup_Keys 8data.Cluster_ResourceGroup_Resource_Keys 9data.Cluster_SqlServer_Agent_Job_History_Keys 10data.Cluster_SqlServer_Agent_Job_Keys 11data.Cluster_SqlServer_Database_BackupType_Backup_Keys 12data.Cluster_SqlServer_Database_BackupType_Keys 13data.Cluster_SqlServer_Database_CustomMetric_Keys 14data.Cluster_SqlServer_Database_File_Keys 15data.Cluster_SqlServer_Database_Keys 16data.Cluster_SqlServer_Database_Table_Index_Keys 17data.Cluster_SqlServer_Database_Table_Keys 18data.Cluster_SqlServer_Error_Keys 19data.Cluster_SqlServer_Keys 20data.Cluster_SqlServer_Services_Keys 21data.Cluster_SqlServer_SqlProcess_Keys 22data.Cluster_SqlServer_TopQueries_Keys 23data.Cluster_SqlServer_Trace_Keys 24data.Group_Keys The full object type hierarchy looks like this: Cluster Machine LogicalDisk Network Process Services ResourceGroup Resource SqlServer Agent Job History Database BackupType Backup CustomMetric File Table Index Error Services SqlProcess TopQueries Trace Group Okay, but what about the individual objects themselves represented at each level in this hierarchy? Well that’s what the _Keys tables are for. This is probably best illustrated by way of a simple example – how can I query my own data repository to find the databases on my own PC for which monitoring data has been collected? Like this: SELECT clstr._Name AS cluster_name, srvr._Name AS instance_name, db._Name AS database_name FROM data.Cluster_SqlServer_Database_Keys db JOIN data.Cluster_SqlServer_Keys srvr ON db.ParentId = srvr.Id -- Note here how the parent of a Database is a Server JOIN data.Cluster_Keys clstr ON srvr.ParentId = clstr.Id -- Note here how the parent of a Server is a Cluster WHERE clstr._Name = 'dev-chrisl2' -- This is the hostname of my own PC ORDER BY clstr._Name, srvr._Name, db._Name;  cluster_nameinstance_namedatabase_name 1dev-chrisl2SqlMonitorData 2dev-chrisl2master 3dev-chrisl2model 4dev-chrisl2msdb 5dev-chrisl2mssqlsystemresource 6dev-chrisl2tempdb 7dev-chrisl2sql2005SqlMonitorData 8dev-chrisl2sql2005TestDatabase 9dev-chrisl2sql2005master 10dev-chrisl2sql2005model 11dev-chrisl2sql2005msdb 12dev-chrisl2sql2005mssqlsystemresource 13dev-chrisl2sql2005tempdb 14dev-chrisl2sql2008SqlMonitorData 15dev-chrisl2sql2008master 16dev-chrisl2sql2008model 17dev-chrisl2sql2008msdb 18dev-chrisl2sql2008mssqlsystemresource 19dev-chrisl2sql2008tempdb These results show that I have three SQL Server instances on my machine (a default instance, one named sql2005 and one named sql2008), and each instance has the usual set of system databases, along with a database named SqlMonitorData. Basically, this is where I test SQL Monitor on different versions of SQL Server, when I’m developing. There are a few important things we can learn from this query: Each _Keys table has a column named Id. This is the primary key. 
Each _Keys table has a column named ParentId. A foreign key relationship is defined between each _Keys table and its parent _Keys table in the hierarchy. There are two exceptions to this, Cluster_Keys and Group_Keys, because clusters and groups live at the root level of the object hierarchy. Each _Keys table has a column named _Name. This is used to uniquely identify objects in the table within the scope of the same shared parent object. Actually, that last item isn’t always true. In some cases, the _Name column is actually called something else. For example, the data.Cluster_Machine_Services_Keys table has a column named _ServiceName instead of _Name (sorry for the inconsistency). In other cases, a name isn’t sufficient to uniquely identify an object. For example, right now my PC has multiple processes running, all sharing the same name, Chrome (one for each tab open in my web-browser). In such cases, multiple columns are used to uniquely identify an object within the scope of the same shared parent object. Well, that’s it for now. I’ve given you enough information for you to explore the _Keys tables to see how objects are stored in your own data repositories. In a future post, I’ll try to explain how monitoring data is stored for each object, using the _StableSamples and _UnstableSamples tables. If you have any questions about this post, or suggestions for future posts, just submit them in the comments section below.

    Read the article

  • Dependency injection with n-tier Entity Framework solution

    - by Matthew
    I am currently designing an n-tier solution which is using Entity Framework 5 (.net 4) as its data access strategy, but am concerned about how to incorporate dependency injection to make it testable / flexible. My current solution layout is as follows (my solution is called Alcatraz): Alcatraz.WebUI: An asp.net webform project, the front end user interface, references projects Alcatraz.Business and Alcatraz.Data.Models. Alcatraz.Business: A class library project, contains the business logic, references projects Alcatraz.Data.Access, Alcatraz.Data.Models Alcatraz.Data.Access: A class library project, houses AlcatrazModel.edmx and AlcatrazEntities DbContext, references projects Alcatraz.Data.Models. Alcatraz.Data.Models: A class library project, contains POCOs for the Alcatraz model, no references. My vision for how this solution would work is that the web UI would instantiate a repository within the business library; this repository would have a dependency (through the constructor) on a connection string (not an AlcatrazEntities instance). The web UI would know the database connection string, but not that it was an Entity Framework connection string. In the Business project: public class InmateRepository : IInmateRepository { private string _connectionString; public InmateRepository(string connectionString) { if (connectionString == null) { throw new ArgumentNullException("connectionString"); } EntityConnectionStringBuilder connectionBuilder = new EntityConnectionStringBuilder(); connectionBuilder.Metadata = "res://*/AlcatrazModel.csdl|res://*/AlcatrazModel.ssdl|res://*/AlcatrazModel.msl"; connectionBuilder.Provider = "System.Data.SqlClient"; connectionBuilder.ProviderConnectionString = connectionString; _connectionString = connectionBuilder.ToString(); } public IQueryable<Inmate> GetAllInmates() { AlcatrazEntities ents = new AlcatrazEntities(_connectionString); return ents.Inmates; } } In the Web UI: IInmateRepository inmateRepo = new InmateRepository(@"data source=MATTHEW-PC\SQLEXPRESS;initial catalog=Alcatraz;integrated security=True;"); List<Inmate> deathRowInmates = inmateRepo.GetAllInmates().Where(i => i.OnDeathRow).ToList(); I have a few related questions about this design. 1) Does this design even make sense in terms of Entity Framework's capabilities? I've heard that Entity Framework uses the unit-of-work pattern already; am I just adding another layer of abstraction unnecessarily? 2) I don't want my web UI to directly communicate with Entity Framework (or even reference it, for that matter); I want all database access to go through the business layer, as in the future I will have multiple projects using the same business layer (web service, windows application, etc.) and I want it to be easy to maintain / update by having the business logic in one central area. Is this an appropriate way to achieve this? 3) Should the Business layer even contain repositories, or should they be contained within the Access layer? If where they are now is alright, is passing a connection string a reasonable dependency to take? Thanks for taking the time to read!
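
    One hedged way to address questions 2 and 3, sketched below using the poster's own AlcatrazEntities and Inmate types: hide the Entity Framework context behind a small factory interface so the web UI only ever sees IInmateRepository, and do the wiring in a single composition root or IoC container registration. The factory names introduced here are illustrative, not from the original question.

    using System.Linq;
    // Assumes the poster's Alcatraz.Data.Access (AlcatrazEntities) and
    // Alcatraz.Data.Models (Inmate) namespaces are referenced.

    public interface IInmateRepository
    {
        IQueryable<Inmate> GetAllInmates();
    }

    // Hypothetical factory abstraction: nothing above the Access layer needs to
    // know that the connection string is an Entity Framework one.
    public interface IAlcatrazContextFactory
    {
        AlcatrazEntities Create();
    }

    public class AlcatrazContextFactory : IAlcatrazContextFactory
    {
        private readonly string _entityConnectionString;

        public AlcatrazContextFactory(string entityConnectionString)
        {
            // Build the EntityConnectionStringBuilder here, as in the original post.
            _entityConnectionString = entityConnectionString;
        }

        public AlcatrazEntities Create()
        {
            return new AlcatrazEntities(_entityConnectionString);
        }
    }

    public class InmateRepository : IInmateRepository
    {
        private readonly IAlcatrazContextFactory _contextFactory;

        // The repository depends on an abstraction rather than a raw string,
        // which makes it straightforward to substitute a fake in unit tests.
        public InmateRepository(IAlcatrazContextFactory contextFactory)
        {
            _contextFactory = contextFactory;
        }

        public IQueryable<Inmate> GetAllInmates()
        {
            // Context lifetime kept identical to the original code for brevity;
            // in practice you would scope it per request / unit of work and dispose it.
            return _contextFactory.Create().Inmates;
        }
    }

    The web UI (or a container) would then compose these once, for example: IInmateRepository inmateRepo = new InmateRepository(new AlcatrazContextFactory(connectionString));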

    Read the article

  • Sending Parameters with the BizTalk HTTP Adapter

    - by Christopher House
    I've never had occasion to use the BizTalk HTTP adapter since I've always needed SOAP rather than just POX (plain old XML).  Yesterday we decided that we're going to expose some data via a Java servlet that will accept an HTTP post and respond with POX.  I knew BizTalk had an HTTP adapter but I had no idea what its capabilities were. After a quick read through the BizTalk docs, it was apparent that the HTTP send adapter does in fact do posts.  The concern I had though was how we were going to supply parameters to the servlet.  The examples I had seen using the HTTP adapter all involved posting an XML message to some HTTP location.  Our Java guy, however, didn't want to take that approach.  He wanted us to provide a query string via post, much like you'd expect to see on an HTTP get.  I decided to put together a little test scenario to see what I could come up with.  We didn't have a test servlet I could go against and my Java experience is virtually nil, so I decided to put together an ASP.Net project to act as the servlet.  It didn't need to be fancy, just one HttpHandler that accepts a post, reads a parameter and returns XML.  With the HttpHandler done, I put together a simple orchestration to send a message to the handler.  I started by having the orch send a message of type System.String to see what it would look like when the handler received it. I set a breakpoint in my handler and kicked off the orchestration.  Below is what I saw: As I suspected, because of BizTalk's XML serialization, System.String was not going to work.  I thought back to my BizTalk 2004 days and a project I worked on that required sending HTML formatted emails via the SMTP adapter.  To accomplish that, I had used a .Net class with a custom serialization formatter that I got from a Microsoft sample.  The code for the class, RawString, can be found here. I created a new class library with the RawString class as well as a static factory class, referenced that in my orchestration project and changed my message type from System.String to RawString.  Below is what the code in my message construction looks like: After deploying the updated orchestration, I fired it off again and checked the breakpoint in my HttpHandler.  This is what I saw: And there you have it.  The RawString message type allowed me to pass a query string in the HTTP post without wrapping it in XML.
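
    The post doesn't include the test handler's code, but a minimal sketch of that kind of handler might look like the following. The class name and the customerId parameter are purely illustrative, and depending on the content type BizTalk sends, you may need to read Request.InputStream rather than Request.Form.

    using System.Web;

    // Hypothetical ASP.NET handler standing in for the Java servlet: accepts a
    // POST, reads one parameter and answers with plain XML.
    public class PoxHandler : IHttpHandler
    {
        public bool IsReusable { get { return true; } }

        public void ProcessRequest(HttpContext context)
        {
            // A query-string-style body such as "customerId=42", posted as
            // application/x-www-form-urlencoded, surfaces in Request.Form.
            // If the body arrives with a different content type, read
            // context.Request.InputStream instead.
            string customerId = context.Request.Form["customerId"];

            context.Response.ContentType = "text/xml";
            context.Response.Write(
                string.Format("<customer id=\"{0}\"><status>ok</status></customer>",
                              HttpUtility.HtmlEncode(customerId ?? string.Empty)));
        }
    }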

    Read the article

  • Google I/O 2011: Querying Freebase: Get More From MQL

    Google I/O 2011: Querying Freebase: Get More From MQL Jamie Taylor Freebase's query language, MQL, lets you access data about more than 20 million curated entities and the connections between them. Level up your Freebase query skills with advanced syntax, optimisation tricks, schema introspection, metaschema, and more. From: GoogleDevelopers Views: 2007 15 ratings Time: 46:49 More in Science & Technology

    Read the article

  • ATI Radeon HD 4650 AGP Video card not recognized properly

    - by PastorLarry
    I have an ASUS ATI Radeon HD 4650 AGP in this system (yeah, I know how old it is). I've been on Ubuntu since 10.04, and the system has never properly recognized the card. I have always had the VESA drivers installed. Now that I have the time to address the problem, 12.04 was listing the card as "Unknown" under the System Settings. Meanwhile, Sysinfo recognizes the card as: Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller]) Subsystem: ASUSTeK Computer Inc. Device 0028 So I know that this card should be using the radeon driver (or even the radeonhd driver). However, when I installed the mesa-utils package, the card is suddenly reported as: Gallium 0.4 on llvmpipe (LLVM 0x300) So now, I'm completely at a loss. It seems that the llvmpipe stuff has to do with OpenGL, but it still appears that I don't have the proper video driver installed. That being said, anyone know what I can do to force the system to recognize the card and use the radeon driver? [EDIT 05.28] I did look at some other information, including glxinfo and a couple of other commands (it was REALLY late, so I don't remember the other commands) and I got these: glxinfo | grep vendor: server glx vendor string: SGI client glx vendor string: Mesa Project and SGI OpenGL vendor string: X.org glxinfo | grep renderer: OpenGL renderer string: Gallium 0.4 on AMD RV730 One of the other commands gave a whole lot of info and near the end stated that the activation string for the radeon driver was "modprobe radeon". I've tried that from sudo and as root, but it doesn't seem to change anything. I'm at a complete loss. I've even added the xorg-edgers ppa to my Software Sources and updated and rebooted the system, but nothing has changed. Most of all, I can't seem to find any documentation on this issue, as it seems that it's assumed that the radeon driver will install automatically, no questions asked. I feel like such a newbie. Does anyone have any ideas on this? [edit 05.28] results of lsmod | grep radeon (in a more readable format than the comment below): radeon 733693 3 ttm 65344 1 radeon drm_kms_helper 45466 1 radeon drm 197692 5 radeon,ttm,drm_kms_helper i2c_algo_bit 13199 1 radeon [edit 05.29] This is my /etc/X11/xorg.conf: Section "ServerLayout" Identifier "aticonfig Layout" Screen 0 "aticonfig-Screen[0]-0" 0 0 EndSection Section "Module" EndSection Section "Monitor" Identifier "aticonfig-Monitor[0]-0" Option "VendorName" "ATI Proprietary Driver" Option "ModelName" "Generic Autodetecting Monitor" Option "DPMS" "true" EndSection Section "Device" Identifier "aticonfig-Device[0]-0" Driver "fglrx" BusID "PCI:1:0:0" EndSection Section "Screen" Identifier "aticonfig-Screen[0]-0" So here is my question. Can I simply change the name of the driver in the device section to "radeon" instead of "fglrx" and have the radeon driver work? Or is ther a way to use this as a tmeplate and change the appropriate lines and activate the radeon driver through this file?

    Read the article

  • Parallel LINQ - PLINQ

    - by nmarun
    Turns out now with .net 4.0 we can run a query like a multi-threaded application. Say you want to query a collection of objects and return only those that meet certain conditions. Until now, we basically had one ‘control’ that iterated over all the objects in the collection, checked the condition on each object and returned if it passed. We obviously agree that if we can ‘break’ this task into smaller ones, assign each task to a different ‘control’ and ask all the controls to do their job - in-parallel, the time taken the finish the entire task will be much lower. Welcome to PLINQ. Let’s take some examples. I have the following method that uses our good ol’ LINQ. 1: private static void Linq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers =   from num in source 12: where IsDivisibleBy(num, 2) 13: select num; 14: 15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } I’ve added comments for the major steps, but the only thing I want to talk about here is the IsDivisibleBy() method. I know I could have just included the logic directly in the where clause. I called a method to add ‘delay’ to the execution of the query - to simulate a loooooooooong operation (will be easier to compare the results). 1: private static bool IsDivisibleBy(int number, int divisor) 2: { 3: // iterate over some database query 4: // to add time to the execution of this method; 5: // the TableB has around 10 records 6: for (int i = 0; i < 10; i++) 7: { 8: DataClasses1DataContext dataContext = new DataClasses1DataContext(); 9: var query = from b in dataContext.TableBs select b; 10: 11: foreach (var row in query) 12: { 13: // Do NOTHING (wish my job was like this) 14: } 15: } 16:  17: return number % divisor == 0; 18: } Now, let’s look at how to modify this to PLINQ. 1: private static void Plinq(int lowerLimit, int upperLimit) 2: { 3: // populate an array with int values from lowerLimit to the upperLimit 4: var source = Enumerable.Range(lowerLimit, upperLimit); 5:  6: // Start a timer 7: Stopwatch stopwatch = new Stopwatch(); 8: stopwatch.Start(); 9:  10: // set the expectation => build the expression tree 11: var evenNumbers = from num in source.AsParallel() 12: where IsDivisibleBy(num, 2) 13: select num; 14:  15: // iterate over and print the returned items 16: foreach (var number in evenNumbers) 17: { 18: Console.WriteLine(string.Format("** {0}", number)); 19: } 20:  21: stopwatch.Stop(); 22:  23: // check the metrics 24: Console.WriteLine(String.Format("Elapsed {0}ms", stopwatch.ElapsedMilliseconds)); 25: } That’s it, this is now in PLINQ format. Oh and if you haven’t found the difference, look line 11 a little more closely. You’ll see an extension method ‘AsParallel()’ added to the ‘source’ variable. Couldn’t be more simpler right? So this is going to improve the performance for us. Let’s test it. So in my Main method of the Console application that I’m working on, I make a call to both. 
1: static void Main(string[] args) 2: { 3: // set lower and upper limits 4: int lowerLimit = 1; 5: int upperLimit = 20; 6: // call the methods 7: Console.WriteLine("Calling Linq() method"); 8: Linq(lowerLimit, upperLimit); 9: 10: Console.WriteLine(); 11: Console.WriteLine("Calling Plinq() method"); 12: Plinq(lowerLimit, upperLimit); 13:  14: Console.ReadLine(); // just so I get enough time to read the output 15: } YMMV, but here are the results that I got:    It’s quite obvious from the above results that the Plinq() method is taking considerably less time than the Linq() version. I’m sure you’ve already noticed that the output of the Plinq() method is not in order. That’s because, each of the ‘control’s we sent to fetch the results, reported with values as and when they obtained them. This is something about parallel LINQ that one needs to remember – the collection cannot be guaranteed to be undisturbed. This could be counted as a negative about PLINQ (emphasize ‘could’). Nevertheless, if we want the collection to be sorted, we can use a SortedSet (.net 4.0) or build our own custom ‘sorter’. Either way we go, there’s a good chance we’ll end up with a better performance using PLINQ. And there’s another negative of PLINQ (depending on how you see it). This is regarding the CPU cycles. See the usage for Linq() method (used ResourceMonitor): I have dual CPU’s and see the height of the peak in the bottom two blocks and now compare to what happens when I run the Plinq() method. The difference is obvious. Higher usage, but for a shorter duration (width of the peak). Both these points make sense in both cases. Linq() runs for a longer time, but uses less resources whereas Plinq() runs for a shorter time and consumes more resources. Even after knowing all these, I’m still inclined towards PLINQ. PLINQ rocks! (no hard feelings LINQ)
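
    A small aside on the ordering point above: if you do need the results back in source order, PLINQ can preserve it itself with AsOrdered(), at some buffering cost, rather than sorting afterwards. A minimal self-contained sketch:

    using System;
    using System.Linq;

    class OrderedPlinqExample
    {
        static void Main()
        {
            var source = Enumerable.Range(1, 20);

            // AsOrdered() keeps the source ordering in the results while the
            // filtering work is still spread across threads (some buffering cost).
            var evenNumbers = from num in source.AsParallel().AsOrdered()
                              where num % 2 == 0
                              select num;

            foreach (var number in evenNumbers)
            {
                Console.WriteLine("** {0}", number);   // printed in ascending order
            }
        }
    }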

    Read the article

  • Using OpenQuery

    - by Derek Dieter
    The OPENQUERY command is used to initiate an ad-hoc distributed query using a linked-server. It is initiated by specifying OPENQUERY as the table name in the from clause. Essentially, it opens a linked server, then executes a query as if executing from that server. While executing queries directly and receiving data directly in this [...]
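
    The snippet is cut off above, but for reference, a minimal sketch of what an OPENQUERY pass-through call looks like when issued from .NET is shown below; the linked server name, connection string and inner query are placeholders.

    using System;
    using System.Data.SqlClient;

    class OpenQueryExample
    {
        static void Main()
        {
            // Placeholder connection string and linked server name.
            const string connectionString = "Server=.;Integrated Security=true";

            // OPENQUERY takes a linked server name and a pass-through query string;
            // the inner query runs on the linked server and the rows come back
            // as if they were a local table.
            const string sql =
                "SELECT * FROM OPENQUERY([RemoteServer], 'SELECT name FROM sys.databases')";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine(reader.GetString(0));
                    }
                }
            }
        }
    }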

    Read the article

  • How to use XDMCP+GDM and Xnest?

    - by João Pinto
    I have been trying to enable XDMCP on GDM without much success. Following some instructions, I edited /etc/gdm/custom.conf and added: [daemon] RemoteGreeter=/usr/lib/gdm/gdm-xdmcp-chooser-slave [xdmcp] Enable=true I then restarted GDM and tried to connect both locally and from a remote system with: Xnest :1 -query localhost Xnest :1 -query remote_system_hostname I just get a black screen instead of the GDM window as expected. Am I missing something?

    Read the article

  • Passing text message to web page from web user control

    - by Narendra Tiwari
    Here is a brief summary of how we can send a text message to a web page from a web user control. Delegates are the solution. There are many good articles on .NET delegates; you can refer to some of them below. The scenario is that we want to send a text message to the page on completion of some activity on the web control. 1/ Create a base class for the web control (refer to the code below), assuming we are passing some text messages to the page from the web user control  - Declare a delegate  - Declare an event of that delegate type using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; //Declaring delegate with message parameter public delegate void SendMessageToThePageHandler(string messageToThePage); public class ControlBase: System.Web.UI.UserControl { //Event of the delegate type, raised to pass the message to the page public event SendMessageToThePageHandler sendMessageToThePage; public ControlBase() { // TODO: Add constructor logic here } protected override void OnInit(EventArgs e) { base.OnInit(e); } private string strMessageToPass; /// <summary> /// MessageToPass - Property to pass text message to page /// </summary> public string MessageToPass { get { return strMessageToPass; } set { strMessageToPass = value; } } /// <summary> /// SendMessageToPage - Called from control to invoke the event /// </summary> /// <param name="strMessage">Message to pass</param> public void SendMessageToPage(string strMessage) {   if (this.sendMessageToThePage != null)       this.sendMessageToThePage(strMessage); } } 2/ Register the events on the web page in the Page_Load event: this.AddControlEventHandler((ControlBase)WebUserControl1); this.AddControlEventHandler((ControlBase)WebUserControl2); /// <summary> /// AddControlEventHandler - Hooking the web user control event /// </summary> /// <param name="ctrl"></param> private void AddControlEventHandler(ControlBase ctrl) { ctrl.sendMessageToThePage += delegate(string strMessage) {   //display message   lblMessage.Text = strMessage; }; } References: http://www.akadia.com/services/dotnet_delegates_and_events.html 3/

    Read the article

  • Increase application performance

    - by Prayos
    I'm writing a program for a company that will generate a daily report for them. All of the data that they use for this report is stored in a local SQLite database. For this report, they utilize pretty much every bit of the information in the database. So currently, when I query the database, I retrieve everything, and store the information in lists. Here's what I've got: using (var dataReader = _connection.Select(query)) { if (dataReader.HasRows) { while (dataReader.Read()) { _date.Add(Convert.ToDateTime(dataReader["date"])); _measured.Add(Convert.ToDouble(dataReader["measured_dist"])); _bit.Add(Convert.ToDouble(dataReader["bit_loc"])); _psi.Add(Convert.ToDouble(dataReader["pump_press"])); _time.Add(Convert.ToDateTime(dataReader["timestamp"])); _fob.Add(Convert.ToDouble(dataReader["force_on_bit"])); _torque.Add(Convert.ToDouble(dataReader["torque"])); _rpm.Add(Convert.ToDouble(dataReader["rpm"])); _pumpOneSpm.Add(Convert.ToDouble(dataReader["pump_1_strokes_pm"])); _pumpTwoSpm.Add(Convert.ToDouble(dataReader["pump_2_strokes_pm"])); _pullForce.Add(Convert.ToDouble(dataReader["pull_force"])); _gpm.Add(Convert.ToDouble(dataReader["flow"])); } } } I then utilize these lists for the calculations. Obviously, the more information that is in this database, the longer the initial query will take. I'm curious if there is a way to increase the performance of the query at all? Thanks for any and all help. EDIT One of the report rows is called Daily Drilling Hours. For this calculation, I use this method: // Retrieves the timestamps where measured depth == bit depth and PSI >= 50 public double CalculateDailyProjectDrillingHours(DateTime date) { var dailyTimeStamps = _time.Where((t, i) => _date[i].Equals(date) && _measured[i].Equals(_bit[i]) && _psi[i] >= 50).ToList(); return _dailyDrillingHours = Convert.ToDouble(Math.Round(TimeCalculations(dailyTimeStamps).TotalHours, 2, MidpointRounding.AwayFromZero)); } // Checks that the interval is less than 10, then adds the interval to the total time private static TimeSpan TimeCalculations(IList<DateTime> timeStamps) { var interval = new TimeSpan(0, 0, 10); var totalTime = new TimeSpan(); TimeSpan timeDifference; for (var j = 0; j < timeStamps.Count - 1; j++) { if (timeStamps[j + 1].Subtract(timeStamps[j]) <= interval) { timeDifference = timeStamps[j + 1].Subtract(timeStamps[j]); totalTime = totalTime.Add(timeDifference); } } return totalTime; }
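
    One hedged suggestion that follows from the edit above: since the daily calculation only needs the rows for one date where measured_dist equals bit_loc and pump_press is at least 50, that filter can be pushed into the SQL so far less data is read and stored. The sketch below assumes the System.Data.SQLite provider and an already-open SQLiteConnection; the table name (drill_data) is purely a guess, while the column names are the poster's.

    using System;
    using System.Collections.Generic;
    using System.Data.SQLite;   // assumes the System.Data.SQLite provider

    class DrillReportQueries
    {
        private readonly SQLiteConnection _connection;   // assumed to be open

        public DrillReportQueries(SQLiteConnection connection)
        {
            _connection = connection;
        }

        // Returns only the timestamps needed for the daily drilling-hours figure,
        // letting SQLite do the date / bit-depth / pressure filtering.
        public List<DateTime> LoadDrillingTimestamps(DateTime date)
        {
            const string sql =
                @"SELECT timestamp
                  FROM drill_data
                  WHERE date = @date
                    AND measured_dist = bit_loc
                    AND pump_press >= 50
                  ORDER BY timestamp;";

            var timestamps = new List<DateTime>();
            using (var command = new SQLiteCommand(sql, _connection))
            {
                command.Parameters.AddWithValue("@date", date);
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        timestamps.Add(Convert.ToDateTime(reader["timestamp"]));
                    }
                }
            }
            return timestamps;
        }
    }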

    Read the article

  • SharePoint 2010 Data Retrieval Techniques

    - by Jayant Sharma
    In SharePoint, we have two options to perform CRUD operations: 1. using server-side code, 2. using client-side code. Using server-side code, we have: 1. CAML, 2. LINQ. Using client-side code, we have: 1. Client Object Model (1.1 Managed Client Object Model, 1.2 Silverlight Client Object Model, 1.3 ECMA Client Object Model), 2. SharePoint Web Services, 3. ADO.NET Data Services (based on REST web services), 4. RPC calls (owssvr.dll). Which option to use, and when, depends upon requirements. Every option has certain advantages and disadvantages, so before starting development of any new SharePoint project, it is important to understand the limitations of the different methods. The Server Object Model is used when our application is hosted on the same server on which SharePoint is installed, while client-side code is used to access SharePoint from a client system. In SharePoint 2010, the Client Object Model (COM) was specifically introduced to perform SharePoint operations from a client system. Advantages of CAML: it is fast; it can be used from all kinds of technology, like Silverlight or jQuery; you can use the U2U CAML Query Builder to generate CAML queries. Disadvantage of CAML: error prone, as we can detect errors only at runtime. Advantages of LINQ: object-oriented technique (object-relational model); the LINQ to SharePoint provider works with strongly typed list item objects, so IntelliSense is available; no knowledge of CAML is needed; less error prone, as it uses C# syntax; you can compare two fields of a SharePoint list. Disadvantages of LINQ: list attachments are not supported by the SPMetal tool; the Created By, Created, Modified and Modified By fields are not created by the SPMetal tool; custom fields are not created by the SPMetal tool; external lists are not supported; and though at the back end LINQ generates a CAML query, it is slower than using CAML directly in code. Advantages of the Client Object Model: used to access SharePoint from a client system; no web server is required at the client end; can use Silverlight and JavaScript to make a better and faster user experience. Disadvantages of the Client Object Model: you cannot use RunWithElevatedPrivileges; cross-site-collection queries are not possible; fewer APIs are available. ADO.NET Data Services: only list-based operations are possible; other types of operations are not possible. SharePoint Web Services and RPC calls: these were used in SharePoint 2007, but after the introduction of the Client Object Model, Microsoft recommends not using the web services to fetch data from SharePoint; in SharePoint 2010 they are available only for backward compatibility. Ref: http://msdn.microsoft.com/en-us/library/ee539764 - Jayant Sharma
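
    As a rough illustration of the Managed Client Object Model option listed above, the sketch below retrieves list items from a client machine in a single round trip; the site URL, list title and CAML filter are placeholders, not from the original post.

    using System;
    using Microsoft.SharePoint.Client;

    class ClientOmExample
    {
        static void Main()
        {
            // Managed Client Object Model: runs on a client machine and talks to
            // the SharePoint server remotely. Placeholder site URL and list name.
            using (var context = new ClientContext("http://sharepoint-site"))
            {
                List list = context.Web.Lists.GetByTitle("Tasks");

                var query = new CamlQuery
                {
                    ViewXml = "<View><Query><Where><Eq>" +
                              "<FieldRef Name='Status'/><Value Type='Text'>Completed</Value>" +
                              "</Eq></Where></Query></View>"
                };

                ListItemCollection items = list.GetItems(query);
                context.Load(items);       // nothing goes over the wire yet
                context.ExecuteQuery();     // single round trip to the server

                foreach (ListItem item in items)
                {
                    Console.WriteLine(item["Title"]);
                }
            }
        }
    }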

    Read the article

  • Getting Dynamic in SSIS Queries

    - by ejohnson2010
    When you start working with SQL Server and SSIS, it isn’t long before you find yourself wishing you could change bits of SQL queries dynamically. Most commonly, I see people who want to change the date portion of a query so that they can limit the query to the last 30 days, for example. This can be done using a combination of expressions and variables. I will do this in two parts: first I will build a variable that will always contain the 1st day of the previous month, and then I will dynamically...(read more)
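
    The "first day of the previous month" value the post builds with an SSIS expression can be sanity-checked with a couple of lines of C# (for example inside a Script Task); a minimal sketch, with the WHERE clause shown only as an illustration:

    using System;

    class FirstOfPreviousMonth
    {
        static void Main()
        {
            DateTime today = DateTime.Today;

            // First day of the current month, then step back one month.
            DateTime firstOfPreviousMonth = new DateTime(today.Year, today.Month, 1).AddMonths(-1);

            // e.g. used to limit a source query to the last complete month
            Console.WriteLine("WHERE OrderDate >= '{0:yyyy-MM-dd}'", firstOfPreviousMonth);
        }
    }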

    Read the article

  • Oracle Flashback Technologies - Overview

    - by Sridhar_R-Oracle
    Oracle Flashback Technologies - IntroductionIn his May 29th 2014 blog, my colleague Joe Meeks introduced Oracle Maximum Availability Architecture (MAA) and discussed both planned and unplanned outages. Let’s take a closer look at unplanned outages. These can be caused by physical failures (e.g., server, storage, network, file deletion, physical corruption, site failures) or by logical failures – cases where all components and files are physically available, but data is incorrect or corrupt. These logical failures are usually caused by human errors or application logic errors. This blog series focuses on these logical errors – what causes them and how to address and recover from them using Oracle Database Flashback. In this introductory blog post, I’ll provide an overview of the Oracle Database Flashback technologies and will discuss the features in detail in future blog posts. Let’s get started. We are all human beings (unless a machine is reading this), and making mistakes is a part of what we do…often what we do best!  We “fat finger”, we spill drinks on keyboards, unplug the wrong cables, etc.  In addition, many of us, in our lives as DBAs or developers, must have observed, caused, or corrected one or more of the following unpleasant events: Accidentally updated a table with wrong values !! Performed a batch update that went wrong - due to logical errors in the code !! Dropped a table !! How do DBAs typically recover from these types of errors? First, data needs to be restored and recovered to the point-in-time when the error occurred (incomplete or point-in-time recovery).  Moreover, depending on the type of fault, it’s possible that some services – or even the entire database – would have to be taken down during the recovery process.Apart from error conditions, there are other questions that need to be addressed as part of the investigation. For example, what did the data look like in the morning, prior to the error? What were the various changes to the row(s) between two timestamps? Who performed the transaction and how can it be reversed?  Oracle Database includes built-in Flashback technologies, with features that address these challenges and questions, and enable you to perform faster, easier, and convenient recovery from logical corruptions. HistoryFlashback Query, the first Flashback Technology, was introduced in Oracle 9i. It provides a simple, powerful and completely non-disruptive mechanism for data verification and recovery from logical errors, and enables users to view the state of data at a previous point in time.Flashback Technologies were further enhanced in Oracle 10g, to provide fast, easy recovery at the database, table, row, and even at a transaction level.Oracle Database 11g introduced an innovative method to manage and query long-term historical data with Flashback Data Archive. The 11g release also introduced Flashback Transaction, which provides an easy, one-step operation to back out a transaction. Oracle Database versions 11.2.0.2 and beyond further enhanced the performance of these features. Note that all the features listed here work without requiring any kind of restore operation.In addition, Flashback features are fully supported with the new multi-tenant capabilities introduced with Oracle Database 12c, Flashback Features Oracle Flashback Database enables point-in-time-recovery of the entire database without requiring a traditional restore and recovery operation. 
It rewinds the entire database to a specified point in time in the past by undoing all the changes that were made since that time.Oracle Flashback Table enables an entire table or a set of tables to be recovered to a point in time in the past.Oracle Flashback Drop enables accidentally dropped tables and all dependent objects to be restored.Oracle Flashback Query enables data to be viewed at a point-in-time in the past. This feature can be used to view and reconstruct data that was lost due to unintentional change(s) or deletion(s). This feature can also be used to build self-service error correction into applications, empowering end-users to undo and correct their errors.Oracle Flashback Version Query offers the ability to query the historical changes to data between two points in time or system change numbers (SCN) Oracle Flashback Transaction Query enables changes to be examined at the transaction level. This capability can be used to diagnose problems, perform analysis, audit transactions, and even revert the transaction by undoing SQLOracle Flashback Transaction is a procedure used to back-out a transaction and its dependent transactions.Flashback technologies eliminate the need for a traditional restore and recovery process to fix logical corruptions or make enquiries. Using these technologies, you can recover from the error in the same amount of time it took to generate the error. All the Flashback features can be accessed either via SQL command line (or) via Enterprise Manager.  Most of the Flashback technologies depend on the available UNDO to retrieve older data. The following table describes the various Flashback technologies: their purpose, dependencies and situations where each individual technology can be used.   Example Syntax Error investigation related:The purpose is to investigate what went wrong and what the values were at certain points in timeFlashback Queries  ( select .. as of SCN | Timestamp )   - Helps to see the value of a row/set of rows at a point in timeFlashback Version Queries  ( select .. versions between SCN | Timestamp and SCN | Timestamp)  - Helps determine how the value evolved between certain SCNs or between timestamps Flashback Transaction Queries (select .. XID=)   - Helps to understand how the transaction caused the changes.Error correction related:The purpose is to fix the error and correct the problems,Flashback Table  (flashback table .. to SCN | Timestamp)  - To rewind the table to a particular timestamp or SCN to reverse unwanted updates Flashback Drop (flashback table ..  to before drop )  - To undrop or undelete a table Flashback Database (flashback database to SCN  | Restore Point )  - This is the rewind button for Oracle databases. You can revert the entire database to a particular point in time. It is a fast way to perform a PITR (point-in-time recovery). Flashback Transaction (DBMS_FLASHBACK.TRANSACTION_BACKOUT(XID..))  - To reverse a transaction and its related transactions Advanced use cases Flashback technology is integrated into Oracle Recovery Manager (RMAN) and Oracle Data Guard. So, apart from the basic use cases mentioned above, the following use cases are addressed using Oracle Flashback. Block Media recovery by RMAN - to perform block level recovery Snapshot Standby - where the standby is temporarily converted to a read/write environment for testing, backup, or migration purposes Re-instate old primary in a Data Guard environment – this avoids the need to restore an old backup and perform a recovery to make it a new standby. 
Guaranteed Restore Points - to bring back the entire database to an older point-in-time in a guaranteed way. And so on. I hope this introductory overview helps you understand how Flashback features can be used to investigate and recover from logical errors.  As mentioned earlier, I will take a deeper dive into some of the critical Flashback features in my upcoming blogs and address common use cases.

    Read the article

  • Would you expect this error ?

    - by GrumpyOldDBA
    Now I know why, but what I'm thinking is that if I create an error should I get valid data returned? To explain, I was browsing through the dmvs for queries which might benefit from tuning and I identified a query with two clustered index scans ( table scans ). I don't know all the schema off by heart and I was looking for a select by a LoginID column. I assumed this would be numeric and promptly entered an integer value to examine the query plan, yeah I should have looked at the table definition...(read more)

    Read the article

  • Subterranean IL: Fault exception handlers

    - by Simon Cooper
    Fault event handlers are one of the two handler types that aren't available in C#. It behaves exactly like a finally, except it is only run if control flow exits the block due to an exception being thrown. As an example, take the following method: .method public static void FaultExample(bool throwException) { .try { ldstr "Entering try block" call void [mscorlib]System.Console::WriteLine(string) ldarg.0 brfalse.s NormalReturn ThrowException: ldstr "Throwing exception" call void [mscorlib]System.Console::WriteLine(string) newobj void [mscorlib]System.Exception::.ctor() throw NormalReturn: ldstr "Leaving try block" call void [mscorlib]System.Console::WriteLine(string) leave.s Return } fault { ldstr "Fault handler" call void [mscorlib]System.Console::WriteLine(string) endfault } Return: ldstr "Returning from method" call void [mscorlib]System.Console::WriteLine(string) ret } If we pass true to this method the following gets printed: Entering try block Throwing exception Fault handler and the exception gets passed up the call stack. So, the exception gets thrown, the fault handler gets run, and the exception propagates up the stack afterwards in the normal way. If we pass false, we get the following: Entering try block Leaving try block Returning from method Because we are leaving the .try using a leave.s instruction, and not throwing an exception, the fault handler does not get called. Fault handlers and C# So why were these not included in C#? It seems a pretty simple feature; one extra keyword that compiles in exactly the same way, and with the same semantics, as a finally handler. If you think about it, the same behaviour can be replicated using a normal catch block: try { throw new Exception(); } catch { // fault code goes here throw; } The catch block only gets run if an exception is thrown, and the exception gets rethrown and propagates up the call stack afterwards; exactly like a fault block. The only complications that occur is when you want to add a fault handler to a try block with existing catch handlers. Then, you either have to wrap the try in another try: try { try { // ... } catch (DirectoryNotFoundException) { // ... // leave.s as normal... } catch (IOException) { // ... throw; } } catch { // fault logic throw; } or separate out the fault logic into another method and call that from the appropriate handlers: try { // ... } catch (DirectoryNotFoundException ) { // ... } catch (IOException ioe) { // ... HandleFaultLogic(); throw; } catch (Exception e) { HandleFaultLogic(); throw; } To be fair, the number of times that I would have found a fault handler useful is minimal. Still, it's quite annoying knowing such functionality exists, but you're not able to access it from C#. Fortunately, there are some easy workarounds one can use instead. Next time: filter handlers.

    Read the article

  • Subterranean IL: Exception handler semantics

    - by Simon Cooper
    In my blog posts on fault and filter exception handlers, I said that the same behaviour could be replicated using normal catch blocks. Well, that isn't entirely true... Changing the handler semantics Consider the following: .try { .try { .try { newobj instance void [mscorlib]System.Exception::.ctor() // IL for: // e.Data.Add("DictKey", true) throw } fault { ldstr "1: Fault handler" call void [mscorlib]System.Console::WriteLine(string) endfault } } filter { ldstr "2a: Filter logic" call void [mscorlib]System.Console::WriteLine(string) // IL for: // (bool)((Exception)e).Data["DictKey"] endfilter }{ ldstr "2b: Filter handler" call void [mscorlib]System.Console::WriteLine(string) leave.s Return } } catch object { ldstr "3: Catch handler" call void [mscorlib]System.Console::WriteLine(string) leave.s Return } Return: // rest of method If the filter handler is engaged (true is inserted into the exception dictionary) then the filter handler gets engaged, and the following gets printed to the console: 2a: Filter logic 1: Fault handler 2b: Filter handler and if the filter handler isn't engaged, then the following is printed: 2a:Filter logic 1: Fault handler 3: Catch handler Filter handler execution The filter handler is executed first. Hmm, ok. Well, what happens if we replaced the fault block with the C# equivalent (with the exception dictionary value set to false)? .try { // throw exception } catch object { ldstr "1: Fault handler" call void [mscorlib]System.Console::WriteLine(string) rethrow } we get this: 1: Fault handler 2a: Filter logic 3: Catch handler The fault handler is executed first, instead of the filter block. Eh? This change in behaviour is due to the way the CLR searches for exception handlers. When an exception is thrown, the CLR stops execution of the thread, and searches up the stack for an exception handler that can handle the exception and stop it propagating further - catch or filter handlers. It checks the type clause of catch clauses, and executes the code in filter blocks to see if the filter can handle the exception. When the CLR finds a valid handler, it saves the handler's location, then goes back to where the exception was thrown and executes fault and finally blocks between there and the handler location, discarding stack frames in the process, until it reaches the handler. So? By replacing a fault with a catch, we have changed the semantics of when the filter code is executed; by using a rethrow instruction, we've split up the exception handler search into two - one search to find the first catch, then a second when the rethrow instruction is encountered. This is only really obvious when mixing C# exception handlers with fault or filter handlers, so this doesn't affect code written only in C#. However it could cause some subtle and hard-to-debug effects with object initialization and ordering when using and calling code written in a language that can compile fault and filter handlers.

    Read the article

  • Designing Mobile SMS text advertising system

    - by Ramraj Edagutti
    Currently, I am working on a product where we have an SMS text advertising system; using this, we set up advertising campaigns for clients, and later these campaigns are sent to the end users. This is very similar to Google AdWords, but targeted at mobile users via SMS. Just to give an overview of the system: each campaign is mapped to an advertiser; a campaign has a start date and end date; a campaign has filter condition(s) or a query to select the target user base from our database (to whom we send campaigns); the target user base can be fixed, e.g. send the campaign to 10,000 users; the target user base can also be dynamic based on a query condition, e.g. send the campaign to users who are active and from a particular state, district, town etc. (this way the user base will keep changing on a daily basis); a campaign can have multiple campaign messages; each campaign message has a start date and end date; each campaign message can have multiple message texts for different locales, e.g. English, Hindi, Telugu etc. After creating an advertisement campaign, we run a nightly job to provision the target user base for that particular campaign in a separate table, and another daily job runs in the morning, checks the provisioned table for campaigns and targeted users, and sends the campaign to users via SMS. The problem is that the current UI for creating advertising campaigns is designed in a very technical manner; I mean, a normal user, business owner or client cannot use the UI to create a campaign. Below are the reasons why the UI is very technical in nature. The filter condition(s) or query input field takes user ids, mobile numbers or SQL queries. Most of the time, or almost every time, we use big SQL queries, so we end up storing SQL queries in a database for a campaign; later we use this SQL query to fetch the targeted user base. For scheduling these campaigns, we have an input field on the UI which takes Quartz cron expression(s) (e.g. send the campaign on "0 0 9 1-10 MAR 2012"), again very technical in nature. A normal user or business owner cannot use the UI for creating campaigns for the reasons mentioned above; currently, we ourselves (the developers) are helping clients set up and create campaigns. We are trying to redesign the UI to make it more user friendly so that any user can go to the UI and create an advertisement campaign by himself. I am thinking of redesigning the current UI to be similar to the Google AdWords interface, especially for selecting target users based on user geography like country, state, city etc. I also need to select users based on user subscription(s), which might make the system even more complex. And also, for campaign scheduling, I am thinking of using weekdays with hours. For example, I will show Monday to Sunday on the UI, and the user can select the from hours, to hours etc. Any better ideas or suggestions on how to design the UI in a user-friendly manner, and what design should be followed for the server-side code (we write backend code in Java/JPA/Spring/Quartz)? And I am looking for ideas or design patterns on how to build SQL queries (using JPA/Hibernate) programmatically on the server side, based on various conditions like country, state, town, village, and user subscriptions.

    Read the article

  • Showplan Operator of the Week - Merge Interval

    When Fabiano agreed to undertake the epic task of describing each showplan operator, none of us quite predicted the interesting ways that the series helps to understand how the query optimizer works. With the Merge Interval, Fabiano comes up with some insights about the way that the Query optimizer handles overlapping ranges efficiently.

    Read the article

  • Would you expect this then ?

    - by GrumpyOldDBA
    I've just been working through a database looking for log tables which can have data aged out, or deleted if you like. I was using SSMS for this and used the default right-click select top 1000 rows as a quick view of the data. Having established that I had unwanted data in such a table, I deleted based upon getdate()-30; nearly 7,000 rows were deleted. I then highlighted the top 1000 rows query and re-ran it; surprisingly, I still got the same result set with 2006 dates in it, and 1,000 rows. A quick...(read more)

    Read the article

  • Showplan Operator of the Week - Lazy Spool

    Continuing to illuminate the depths of SQL Server's Query Optimizer, Fabiano shines a light on the sixth major Showplan Operator on his list: the Lazy Spool. What does the Lazy Spool do that's so special, how does the Query Optimizer use it, and why is it so Lazy? Fabiano explains all...

    Read the article

  • consume a .net webservice using jQuery

    - by Babunareshnarra
    This implementation shows the way to consume a web service using jQuery. Client-side AJAX with an HTTP POST request is significant when it comes to loading speed and responsiveness. Following is the service, which returns a string in JSON. [WebMethod][ScriptMethod(ResponseFormat = ResponseFormat.Json)]public string getData(string marks){    DataTable dt = retrieveDataTable("table", @"              SELECT * FROM TABLE WHERE MARKS='"+ marks.ToString() +"' ");    List<object> RowList = new List<object>();    foreach (DataRow dr in dt.Rows)    {        Dictionary<object, object> ColList = new Dictionary<object, object>();        foreach (DataColumn dc in dt.Columns)        {            ColList.Add(dc.ColumnName,            (string.Empty == dr[dc].ToString()) ? null : dr[dc]);        }        RowList.Add(ColList);    }    JavaScriptSerializer js = new JavaScriptSerializer();    string JSON = js.Serialize(RowList);    return JSON;}Consuming the web service: $.ajax({    type: "POST",    data: '{ "marks": "' + val + '"}', // This is required if we are using parameters    contentType: "application/json",    dataType: "json",    url: "/dataservice.asmx/getData",    success: function(response) {        var RES = JSON.parse(response.d);        var obj = JSON.stringify(RES);    },    error: function (msg) {        alert('failure');    }});Remember to reference the jQuery library on the page.

    Read the article

  • SCCM 2007 Collections per OU

    - by VirtualizeIT
    Recently I wanted to set up our SCCM collections to mirror our Active Directory structure. I finally figured out how to create collections per OU of the domain, so I decided to create a simple tutorial that may show other IT professionals the steps to complete this task.   1. Open the ConfigMgr console and navigate to the collections. To navigate to the collections go to Site Database > Computer Management > Collections. 2. In ‘Collections’, right-click and select New Collections. A wizard will then pop up so you can enter the name of the collection and any notes you may want to add that are associated with the collection. 3. Next, select the database icon. In the ‘Name’ textbox enter the name of the query. I named mine ‘Query’ just for simplicity's sake. After you enter the name, select ‘Edit Query Statement…’ 4. Select the ‘Criteria’ tab. 5. Select the icon that looks like a sun. 6. At this point you should see a dialog box like this… 7. Next, click the ‘Select’ button. 8. Under ‘Attribute class’, scroll through until you see ‘System Resource’, and for ‘Attribute’, scroll through until you see ‘System OU Name’. It should look something like this… 9. After that, select OK. 10. In the ‘Value’ textbox enter the string that is associated with the OU in your domain. NOTE: If you don’t know the string name for your OU you can simply go to “Active Directory Users and Computers”, right-click on the OU and select Properties. In the ‘Object’ tab you should see the string under ‘Canonical name of object’. That is the string that you put in the ‘Value’ text box. 11. After you enter the OU string name press OK > OK > OK > NEXT > NEXT > FINISH.   That’s it! I hope this tutorial has helped you understand how to create a collection through your OU structure.

    Read the article
